Cloud Computing Training

Added by Kevin Jackson | Jul 15, 2010 23:07
1 Day 1 - Cloud Computing Basics
1.1 Why Cloud Computing
1.1.1 Animoto Story

Animoto uses RightScale as its cloud infrastructure partner. Animoto co-founder Brad Jefferson discusses the company's infamous week, when Animoto went from 25k users to 700k users in 5 days and had to scale from 50 to 5k servers.

1.1.2 NASDAQ

Nasdaq OMX has lots of stock and fund data, and it wanted to make extra revenue selling historic data for those stocks and funds. But for this offering, called Market Replay, the company didn't want to worry about optimizing its databases and servers to handle the new load. So it turned to Amazon's S3 service to host the data, and created a lightweight reader app using Adobe's AIR technology that let users pull in the required data. "If I'm someone like Nasdaq, it's a cheap experiment," says Nik Simpson, a senior analyst at the Burton Group. The traditional approach wouldn't have gotten off the ground economically, recalls Claude Courbois, an associate vice president for data products at Nasdaq: "The expenses of keeping all that data online was too high." So Nasdaq took its market data and created flat files for every entity, each holding enough data for a 10-minute replay of the stock's or fund's price changes, on a second-by-second basis. (It adds 100,000 files per day to the several million it started with, Courbois says.) The Adobe AIR app Courbois' team put together in just a couple days pulls in the flat files stored at Amazon.com and then creates the replay animations from them. The result: "We don't need a database constantly staging data on the server side. And the price is right."
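The Market Replay idea above can be sketched in a few lines. This is a hypothetical illustration: the file format (one "seconds-offset,price" line per tick) and the function names are assumptions for the sketch, not NASDAQ's actual format.

```python
# Hypothetical sketch of the Market Replay flat-file idea: one small file per
# security, holding a 10-minute window of second-by-second prices. No database
# is needed on the server side; the client just fetches and replays the file.

def parse_replay_file(text):
    """Parse 'seconds_offset,price' lines into a list of (offset, price) ticks."""
    ticks = []
    for line in text.strip().splitlines():
        offset, price = line.split(",")
        ticks.append((int(offset), float(price)))
    return ticks

def replay(ticks, start, end):
    """Return the ticks that fall inside a [start, end) second window."""
    return [(t, p) for t, p in ticks if start <= t < end]

# A tiny stand-in for a downloaded flat file.
sample = "0,31.50\n1,31.52\n2,31.49\n600,31.70"
ticks = parse_replay_file(sample)
window = replay(ticks, 0, 600)   # the 10-minute (600-second) window
```

The client-side app (Adobe AIR, in NASDAQ's case) would then animate `window` tick by tick.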

1.1.3 NY Times

The New York Times also used S3 for a data-intensive project: converting 11 million articles, published from the newspaper's founding in 1851 through 1989, to make them available through its Web site search engine. The Times scanned in the stories, cut up into columns to fit in the scanners (as TIFF files), then uploaded those to S3 — taking 4TB of space — over several weeks of WAN transfers from the Times' datacenter. The Times didn't coordinate the job with Amazon — someone in IT just signed up for the service on the Web using a credit card, then began uploading the data. "After about 3TB, we got an e-mail [from Amazon.com] to ask if this would be a perpetual load," recalls Derek Gottfrid, senior software architect at the Times. Then, using Amazon.com's EC2 computing platform, the Times ran a PDF conversion app that converted that 4TB of TIFF data into 1.5TB of PDF files. Using 100 Linux computers, the job took about 24 hours. Then a coding error was discovered that required the job be rerun, adding a second day to the effort — and increasing the tab by just $240. "It would have taken a month at our facilities, since we only had a few spare PCs," Gottfrid says. "It was cheap experimentation, and the learning curve isn't steep."
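The conversion job above is a classic embarrassingly parallel batch: split the files across workers and convert each independently. A minimal sketch, with a stand-in `convert` function and a thread pool standing in for the 100 EC2 instances:

```python
# Toy sketch of the Times' batch-conversion pattern: many independent inputs,
# one converter, many workers. The real job ran a PDF converter on 100 EC2
# instances; `convert` here is a stand-in, not the actual conversion code.
from concurrent.futures import ThreadPoolExecutor

def convert(tiff_name):
    """Stand-in for the real TIFF -> PDF conversion step."""
    return tiff_name.replace(".tiff", ".pdf")

articles = [f"article_{i}.tiff" for i in range(1000)]  # 11 million in the real job

with ThreadPoolExecutor(max_workers=100) as pool:      # 100 machines in the real job
    pdfs = list(pool.map(convert, articles))
```

Because no file depends on any other, doubling the workers roughly halves the wall-clock time, which is what made the one-day, $240 rerun possible.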

1.1.4 Washington Post
1.2 What is cloud computing?
1.2.1 Definition

Definition: Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

1.2.1.1 Five key characteristics
1.2.1.1.1 Rapid Elasticity

i. Rapid Elasticity: Elasticity is defined as the ability to scale resources both up and down as needed. To the consumer, the cloud appears to be infinite, and the consumer can purchase as much or as little computing power as they need. This is one of the essential characteristics of cloud computing in the NIST definition.
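As a toy illustration of elasticity, capacity can be derived from observed load and adjusted in both directions. The per-instance capacity and minimum here are invented for the sketch:

```python
# Minimal sketch of elastic scaling: choose an instance count from observed
# load, growing and shrinking as demand changes. Thresholds are illustrative.
import math

def desired_instances(requests_per_sec, capacity_per_instance=100, minimum=1):
    """Return enough instances to serve the load, never fewer than `minimum`."""
    return max(minimum, math.ceil(requests_per_sec / capacity_per_instance))
```

A controller would evaluate this periodically and provision or release instances to match, which is exactly the "scale both up and down" property the definition describes.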

1.2.1.1.2 Measured Service

ii. Measured Service: In a measured service, aspects of the cloud service are controlled and monitored by the cloud provider. This is crucial for billing, access control, resource optimization, capacity planning and other tasks.

1.2.1.1.3 On-Demand Self Service

iii. On-Demand Self-Service: The on-demand and self-service aspects of cloud computing mean that a consumer can use cloud services as needed without any human interaction with the cloud provider.

1.2.1.1.3.1 IaaS
1.2.1.1.3.1.1 Compute
1.2.1.1.3.1.2 Storage
1.2.1.1.3.2 PaaS
1.2.1.1.4 Ubiquitous Network Access

iv. Ubiquitous Network Access: Ubiquitous network access means that the cloud provider’s capabilities are available over the network and can be accessed through standard mechanisms by both thick and thin clients.

1.2.1.1.5 Resource Pooling

v. Resource Pooling: Resource pooling allows a cloud provider to serve its consumers via a multi-tenant model. Physical and virtual resources are assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter).

1.2.1.2 Three Deployment Models
1.2.1.2.1 Public Cloud

i. Public Cloud: In simple terms, public cloud services are characterized as being available to clients from a third-party service provider via the Internet. The term "public" does not always mean free, even though it can be free or fairly inexpensive to use. A public cloud does not mean that a user's data is publicly visible; public cloud vendors typically provide an access control mechanism for their users. Public clouds provide an elastic, cost-effective means to deploy solutions.

1.2.1.2.2 Private Cloud

ii. Private Cloud: A private cloud offers many of the benefits of a public cloud computing environment, such as being elastic and service-based. The difference is that in a private cloud, data and processes are managed within the organization, without the network-bandwidth restrictions, security exposures, and legal requirements that using public cloud services might entail. In addition, private cloud services offer the provider and the user greater control of the cloud infrastructure, improving security and resiliency because user access and the networks used are restricted and designated.

1.2.1.2.3 Community Cloud

iii. Community Cloud: A community cloud is controlled and used by a group of organizations that have shared interests, such as specific security requirements or a common mission. The members of the community share access to the data and applications in the cloud.

1.2.1.2.4 Hybrid Cloud

iv. Hybrid Cloud: A hybrid cloud is a combination of a public and a private cloud that interoperate. In this model users typically outsource non-business-critical information and processing to the public cloud, while keeping business-critical services and data in their control.

1.2.1.3 Three Delivery Models
1.2.1.3.1 Software-as-a-Service (SaaS)

i. Software as a Service (SaaS): The consumer uses an application, but does not control the operating system, hardware, or network infrastructure on which it's running.

1.2.1.3.1.1 Salesforce.com
1.2.1.3.2 Platform-as-a-Service (PaaS)

ii. Platform as a Service (PaaS): The consumer uses a hosting environment for their applications. The consumer controls the applications that run in the environment (and possibly has some control over the hosting environment), but does not control the operating system, hardware or network infrastructure on which they are running. The platform is typically an application framework.

1.2.1.3.2.1 Google AppEngine
1.2.1.3.2.2 Force.com
1.2.1.3.2.3 Open PaaS
1.2.1.3.3 Infrastructure-as-a-Service (IaaS / HaaS)

iii. Infrastructure as a Service (IaaS): The consumer uses "fundamental computing resources" such as processing power, storage, networking components or middleware. The consumer can control the operating system, storage, deployed applications and possibly networking components such as firewalls and load balancers, but not the cloud infrastructure beneath them.

1.2.1.3.3.1 Examples
1.2.1.3.3.1.1 Amazon Web Service

Amazon played a key role in the development of cloud computing by modernizing their data centers after the dot-com bubble, which, like most computer networks, were using as little as 10% of their capacity at any one time just to leave room for occasional spikes. Having found that the new cloud architecture resulted in significant internal efficiency improvements whereby small, fast-moving "two-pizza teams" could add new features faster and more easily, Amazon started providing access to their systems through Amazon Web Services on a utility computing basis in 2006.[22] This characterization of the genesis of Amazon Web Services has been called an extreme over-simplification by a technical contributor to the Amazon Web Services project.[23]

1.2.1.3.3.1.2 Unisys
1.2.1.3.3.1.3 EMC Atmos
1.2.1.3.3.1.4 Loudcloud

Loudcloud, founded in 1999 by Marc Andreessen, was one of the first to attempt to commercialize cloud computing with an Infrastructure as a Service model.[20] By the turn of the 21st century, the term "cloud computing" began to appear more widely,[21] although most of the focus at that time was limited to SaaS — called "ASPs," or Application Service Providers, in the terminology of the day.

1.2.1.3.3.2 Services
1.2.1.3.3.2.1 Compute
1.2.1.3.3.2.1.1 Physical Machines
1.2.1.3.3.2.1.2 Virtual Machines
1.2.1.3.3.2.1.3 OS-level virtualization
1.2.1.3.3.2.2 Network
1.2.1.3.3.2.3 Storage
1.2.1.4 Two Domains
1.2.1.4.1 Enterprise

i. Enterprise – static, usually physical, organization leveraging the Internet for global connectivity with centralized governance and management

1.2.1.4.2 Tactical

ii. Tactical – a dynamic, typically short-lived virtual organization. Leverages wireless networking and internetworking technologies to form private or public, interoperable systems in order to accomplish a time- or goal-specific activity.

1.2.1.5 What cloud computing is not
1.2.1.5.1 Grid Computing

a. Grid computing — "a form of distributed computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks"

1.2.1.5.2 Utility Computing

b. Utility computing — the "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity"

1.2.1.5.3 Autonomic Computing

c. Autonomic computing — "computer systems capable of self-management"

1.2.2 Technologies & Marketplace
1.2.2.1 Commodity Hardware

a. Commodity Hardware. Chips (processors, memory, etc.), storage (mostly disk drives), and networks (within a datacenter, wide area, and wireless) – there have been large strides in the capabilities of what is, by historical standards, throw-away equipment. For example, a client of one of the authors was able to match a competitor's industry-leading, mainframe-based performance in processing high-volume customer transactions with less than a dozen cheap commodity boxes sitting on a re-purposed kitchen rack. Total bill? Less than $10,000. Yes, it works, and it works very well. The key, of course, was in how the applications were constructed and how that set of machines is reliably managed. In any case, there will be more on this example, as well as others, later in the book.

1.2.2.2 Network Speed

b. Network Speed. While network performance has not increased at the same rate as either processor or storage performance, huge strides have been made in both the connections within a datacenter and those outside. For example, a "gigE" network card (for use by a commodity computer within a datacenter) costs less than $10 in small quantities. To put that in perspective, that is about 400% faster than the internal bus connections (the key internal connectivity within server computers) of the typical "big servers" of the early 1980s. A 10 Mbps wired connection for the home or office averages less than $50 per month in the United States, and even less in many parts of the world. Mainstream mobile wireless speeds are closer to 7 Mbps, at the cost of only a modest part of the typical monthly cell phone budget. The point is simple: whether within the datacenter, at fixed locations throughout the world, or on mobile devices, cheap, fast, reliable, and ubiquitous network connections are a fact of life.

1.2.2.3 Virtualization

c. Virtualization. Starting as a way to share very expensive mainframes between otherwise incompatible operating systems, and later flowering in the similar trend to consolidate large numbers of small servers (each typically dedicated to one or two specific applications), virtualization is the ability to operate particular resources (such as computers and networks) largely independent of the physical infrastructure upon which they are deployed. This can be a tremendous boon for operations. For example, the initial configuration of the operating system for a server, along with the applications to run on that server, can take hours, if not days. With virtualization that initial work is done once and the results put on the shelf, to be deployed onto physical hardware when needed. This process, sometimes referred to as "hydration", can be done in as little as a few seconds to minutes and repeated as often as needed, thereby enabling the possibility of easily deploying basic software to large numbers of computers.
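The "hydration" idea can be sketched as: pay the slow configuration cost once, then stamp copies of the resulting image onto hardware on demand. All names and fields below are illustrative, not any particular hypervisor's API:

```python
# Sketch of hydration: slow setup happens once to produce a "golden image";
# deployment is then just a fast copy of that image per physical host.
import copy

def build_image():
    """Stand-in for the slow, one-time configuration work (hours or days in practice)."""
    return {"os": "linux", "packages": ["webserver", "monitoring"], "configured": True}

golden_image = build_image()          # done once and "put on the shelf"

def hydrate(hostname):
    """Fast: stamp a copy of the prepared image onto a host (seconds to minutes)."""
    vm = copy.deepcopy(golden_image)
    vm["hostname"] = hostname
    return vm

fleet = [hydrate(f"node-{i}") for i in range(50)]
```

The asymmetry is the point: one expensive `build_image` call amortized over fifty cheap `hydrate` calls.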

1.2.2.4 Application Architectures

d. Application Architectures. Beginning with the development of "object oriented" languages and tools in the 1980s and 1990s, and continuing on through the beginning of web services and service oriented architectures during this decade, software architectures have made many strides toward the eternal goal of software reusability, itself driven by the desire to make it easier to construct software. A key characteristic of typical cloud applications has been the fine-grained components, with an exposed API (i.e., the ability to make use of that portion of an application from nearly anywhere on the Internet). This ability to mix and match relatively fine-grained software services is crucial in making software more useful. For many, this has been the practical realization of service oriented architectures, an interesting topic that we will explore in more detail later in the book. In addition, there have been significant advances in creating more resilient, self-organizing application platforms that are inherently at home on top of the very fluid, commoditized infrastructure typical in cloud computing. Finally, the need to become more adept at parallelization in order to effectively use multi-core processors is beginning to have an impact.

1.2.2.5 Data Storage Architectures

e. Data Storage Architectures. The first two ages of computing have very much been dominated by database systems, often relational databases such as Oracle, MySQL, SQL Server, Postgres, and others. Entire (data management) organizations exist in most enterprises to manage the structure of this relational data within these repositories, with strict rules about how such data is accessed, updated, and so forth. Unfortunately, what we have learned from abundant experience is that at some point the bottleneck to scaling any given application will nearly always be the relational database itself. As a result the whole approach to reliably storing, processing, and managing data at large scale is being re-thought, resulting in a number of innovative, novel technologies that show significant promise. We are also beginning to see some significant improvements in the underlying storage infrastructure itself, both in composition and operations.
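One common scale-out alternative to the single relational database is a key-value store sharded by hash, so capacity grows by adding shards rather than by buying a bigger database server. A minimal sketch (illustrative only, not any particular product's API):

```python
# Toy sharded key-value store: each key is routed to one shard by hashing,
# so data and load spread evenly and more shards mean more capacity.
import hashlib

class ShardedStore:
    def __init__(self, n_shards=4):
        self.shards = [{} for _ in range(n_shards)]

    def _shard_for(self, key):
        # Hash the key so placement is deterministic and roughly uniform.
        digest = hashlib.md5(key.encode()).hexdigest()
        return self.shards[int(digest, 16) % len(self.shards)]

    def put(self, key, value):
        self._shard_for(key)[key] = value

    def get(self, key):
        return self._shard_for(key).get(key)

store = ShardedStore()
store.put("user:1", "alice")
```

The trade-off is that cross-key queries (joins, transactions) become the application's problem, which is exactly the re-thinking of data management the paragraph describes.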

1.2.2.6 Pervasive High Quality Access

f. Pervasive High-Quality Access. The reality – quality, variety, quantity – of high-quality, visually attractive, widely available devices has had a tremendous impact on the development of cloud computing. Typical devices include fixed desktops with one or more flat panels; laptops and netbooks of every size, price range, and performance level; and ubiquitous, sometimes specialized, nearly always relatively inexpensive handheld devices such as the iPhone and its growing range of competitors – all sharing a wide range of wireless high-speed Internet access. This plethora of high-quality, pervasive devices has greatly increased the number of customers for services and content – the data and applications sold on the cloud – as well as the appetite for even more services and data. In March 2008 Apple announced that it would create a marketplace from which third-party developers could sell applications to owners of an iPhone. Despite a tremendous amount of uncertainty – including many who thought that the whole concept would simply fizzle out for any of a number of reasons – within the first nine months Apple was able to sell more than one billion individual applications. From zero to a billion in less than a year.

1.2.2.7 Culture

g. Culture. Perhaps conditioned by the expectation – quite often the reality as well – that everything is available all the time, that Google and others can tell you where any place is, and that you can then reach your friends (no matter the time or place) to tell them about it; perhaps all of that is simply too obvious to think about much anymore. But no matter: take a moment and ponder what this assumption, this daily reality, has wrought on society. It is an important factor that must be considered. After all, in some sense culture is a measure of how members of a society interact with each other, and the transition to the era of cloud computing is bringing incalculable changes to this very arena. That in our culture – for that matter, nearly any culture around the world today – this means of communication is simply a given, that we all take it for granted, is a profound enabler for future services, for the future proliferation of cloud-based services. After all, this is what our cultures now expect, this is what people demand, this is how people interact.

1.3 Cloud Computing History
1.3.1 First Age

i. Focus was on big infrastructure – mainframes, big point-to-point networks, centralized databases, and big batch jobs.

ii. Toward the end, terminals evolved into personal computers, and networks went from hierarchical – with the mainframes at the center of each network – to decentralized, with a broader, generally more numerous collection of servers and storage scattered throughout an organization. While batch work still existed, many programs became interactive through this age, eventually gaining much more visual interfaces along the way.

iii. Infrastructure tended to be associated with particular applications – a practice since pejoratively known as "application silos" – and important applications generally demanded enterprise-grade (read: expensive) infrastructure … mainframes or big servers, and so forth. This period also saw the rise of databases, along with the beginnings of the specialized storage infrastructure upon which those databases relied.

1.3.2 Second Age

i. The rise of the Internet – Sun, Cisco, Mosaic (which became Netscape), web 1.0, eBay, Yahoo, baby.com, and the first Internet bubble.

ii. Development and near-ubiquity of easy-to-use, visually attractive devices – devices that could be used by nearly everyone. The biggest technical contribution of the second age was in the network itself. In being forced to deal with the possibility of massive network failures caused by a nuclear attack, researchers endowed their invention with the ability to self-organize, to seek out alternate routes for traffic, and to adapt to all sorts of unforeseen circumstances.

iii. The single point of failure that was typical of mainframe-inspired networks was removed … and in doing so, in one fell swoop, the biggest technological barrier to scaling was removed.

1.3.3 Third Age

i. Early in the second age Yahoo started "indexing the Internet," an index that for some time was mostly manually constructed. While this was sufficient for a while, it soon became apparent that manually built indices could never keep up with the growth of the Internet itself.

ii. Several other indexing efforts began – including AltaVista, Google, and others – but it was at Google where everything came together.

1.3.4 The Transformation
1.3.4.1 Drive for Scale

1. In the 1970s and 1980s, most computing was done on relatively mundane, large-scale individual computers, or perhaps in small clusters of relatively big machines.

2. In the mid-1980s another thread of investigation took root – in many ways inspired by biological systems – which started by combining large numbers of relatively slow computers, sometimes loosely coupled via a local area network (these came to be known as grids), sometimes internally via specialized connections (such as the exotic Connection Machine 1, produced by Thinking Machines, the whole effort being the commercialization of the doctoral work of Daniel Hillis).

3. This drive for scale went mainstream along with the Internet. This was true in many dimensions, but for one easy example consider the indexing problem itself: whereas an early (circa 1994) Yahoo index might have had less than a hundred, or at most a few hundred, entries and could be manually created, by the beginning of 1995 the number of web sites was doubling every 53 days.

4. In late 2004 Intel announced that it was largely abandoning its push to increase the "clock speed" of individual processing elements, and would henceforth instead be increasing the number of individual processing units (or cores). While, at least in theory, this drive for increased core counts can deliver the same raw computing capacity, in practice it is much more difficult to write application software that can make use of all of these cores.

5. This is, in essence, the "parallelization problem," which in many ways is the same no matter whether you are writing software for multiple cores within a single piece of silicon, multiple cores on multiple processors within a single computing system, or multiple cores on multiple processors on multiple computing systems within a single grid/cluster/fabric/cloud.
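The split/compute/combine structure at the heart of the parallelization problem looks the same whether the workers are cores on one chip or machines in a cluster. A small sketch, summing squares across four workers:

```python
# The parallelization problem in miniature: partition the work, compute the
# pieces concurrently, then combine the partial results. The same shape
# applies at core, processor, and cluster scale.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """Each worker handles one independent slice of the work."""
    return sum(x * x for x in chunk)

numbers = list(range(10_000))
chunks = [numbers[i::4] for i in range(4)]   # four disjoint slices

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))
```

The hard part in real applications is finding a partition like `chunks` where the pieces truly are independent; when they are not, the combine step (or worse, coordination during compute) dominates.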

1.3.4.2 Drive for Cheap

1. How to ensure that commodity infrastructure could be ultimately reliable, easy to operate, and easy to bring software onto.

2. The economies of scale with commodity infrastructure – such as general-purpose processors – are simply overwhelming when compared to specialty designs.

3. The transition to cloud is answering how we can use commodity infrastructure for the problems that we care about.

1.3.4.3 Google

i. First, the collection of data about the current state of the Internet, and the processing of that data, had to be as fully automated as possible.

ii. In order to save as much money as possible, the infrastructure would be constructed out of commodity components – out of "cheap stuff that breaks."

iii. Data storage needed to be done in a simple yet fairly reliable manner to facilitate scaling (the Google File System, or GFS – notice the lack of a traditional database, but more on that later).

iv. New types of application development architecture would be required, which came to include the so-called map-reduce family (which inspired open source descendants such as Hadoop), among others.

v. Operations needed to be as automatic and resilient to failure as possible.

vi. Outages in the application were tolerable – after all, this was search, and who would miss a few results if an outage occurred?
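The map-reduce family mentioned in (iv) can be illustrated with the classic word-count toy: a map step emits key/value pairs, a shuffle groups them by key, and a reduce step combines each group. This is a single-process sketch of the pattern, not Google's or Hadoop's actual API:

```python
# Toy map-reduce word count. In a real system the map tasks, the shuffle, and
# the reduce tasks each run distributed across many commodity machines.
from collections import defaultdict

def map_step(document):
    """Emit a (word, 1) pair for every word in one input document."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Group all emitted values by key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_step(grouped):
    """Combine each key's values into a final result."""
    return {key: sum(values) for key, values in grouped.items()}

docs = ["the cloud", "the grid and the cloud"]
pairs = [pair for doc in docs for pair in map_step(doc)]
counts = reduce_step(shuffle(pairs))
```

Because every map task and every reduce task is independent, the framework can rerun any task that fails on "cheap stuff that breaks" without restarting the whole job.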

1.3.4.4 Amazon

i. In its first six or seven years Amazon largely built its computing infrastructure the traditional way, out of big, heavy servers, with traditional relational databases scattered liberally throughout. As commerce on the Internet began to gain some real steam, it became abundantly clear that the Amazon computing architecture had to change.

ii. Amazon began to adopt many of the same principles as Google had done early on, but then took things a step further. Instead of simply offering entire services – such as search, e-mail, maps, and photos – with various services exposed for calling from outside, in 2006 Amazon began to offer basic computing resources – computing, storage, network bandwidth – as highly flexible, easily provisioned services, all of which could be paid for "by the drink."

1.3.5 Component Evolution
1.3.5.1 Hardware Advances
1.3.5.1.1 Mainframe Computers - 1945
1.3.5.1.2 Micro/GUI/Client Server - 1981
1.3.5.1.3 Grid Computing - 1993
1.3.5.1.4 Thin Client - 1999
1.3.5.1.5 Amazon Elastic Compute Cloud - 2006
1.3.5.2 Network Advances
1.3.5.2.1 Broadband - 1993
1.3.5.2.2 WWW - 1992
1.3.5.2.3 Ethernet - 1973
1.3.5.2.4 ARPANET - 1969
1.3.5.3 Software Advances
1.3.5.3.1 Writely/Google Docs / Zoho - 2005
1.3.5.3.2 REST - 2000
1.3.5.3.3 Salesforce.com - 1999
1.3.5.3.4 Hypervisors - 1999
1.3.5.3.5 GUI - 1975
1.3.6 Government Cloud Computing
1.3.6.1 Examples
1.3.6.1.1 United States

Federal Chief Information Officers Council – Data.gov & IT Dashboard
Defense Information Systems Agency (DISA) – Rapid Access Computing Environment (RACE)
US Department of Energy (DOE) – Magellan
General Services Administration (GSA) – Apps.gov
Department of the Interior National Business Center (NBC) – Cloud Computing
NASA – Nebula
National Institute of Standards and Technology (NIST)

1.3.6.1.1.1 Data.gov
1.3.6.1.1.2 USASpending.gov
1.3.6.1.1.3 NBC Cloud
1.3.6.1.1.4 Apps.gov
1.3.6.1.2 European Union

Resources and Services Virtualization without Barriers Project (RESERVOIR)

1.3.6.1.3 Canada

Canada Cloud Computing
Cloud Computing and the Canadian Environment

1.3.6.1.4 United Kingdom

G-Cloud

1.3.6.1.5 Japan

The Digital Japan Creation Project (ICT Hatoyama Plan)
The Kasumigaseki Cloud

1.3.6.2 GovCloud Framework
1.3.6.2.1 Clients
1.3.6.2.1.1 Definitions

i. Definition - A cloud client consists of computer hardware and/or computer software that relies on cloud computing for application delivery, or that is specifically designed for delivery of cloud services and that, in either case, is essentially useless without it.

1.3.6.2.1.2 Types

1. Mobile (Android, iPhone, Windows Mobile)
2. Thin client (CherryPal, Zonbu, gOS-based systems)
3. Thick client / Web browser (Mozilla Firefox, Google Chrome, WebKit)

1.3.6.2.2 Applications
1.3.6.2.2.1 Definition

i. Definition - A cloud application leverages cloud computing in software architecture, often eliminating the need to install and run the application on the customer's own computer, thus alleviating the burden of software maintenance, ongoing operation, and support.

1.3.6.2.2.2 Types

1. Peer-to-peer / volunteer computing (BOINC, Skype)
2. Web applications (Facebook, Twitter, YouTube)
3. Security as a service (MessageLabs, Purewire, ScanSafe, Zscaler)
4. Software as a service (A2Zapps.com, Google Apps, Salesforce)
5. Software plus services (Microsoft Online Services)
6. Storage [Distributed]
   a. Content distribution (BitTorrent, Amazon CloudFront)
   b. Synchronization (Dropbox, Live Mesh, SpiderOak)

1.3.6.2.2.3 Interface
1.3.6.2.2.3.1 User
1.3.6.2.2.3.2 Machine
1.3.6.2.3 Platform
1.3.6.2.3.1 Definition

i. Definition - A cloud platform (PaaS) delivers a computing platform and/or solution stack as a service, generally consuming cloud infrastructure and supporting cloud applications. It facilitates deployment of applications without the cost and complexity of buying and managing the underlying hardware and software layers.

1.3.6.2.3.2 Services

1. Identity (OAuth, OpenID)
2. Payments (Amazon Flexible Payments Service, Google Checkout, PayPal)
3. Search (Alexa, Google Custom Search, Yahoo! BOSS)
4. Real-world (Amazon Mechanical Turk)

1.3.6.2.3.3 Solution Stacks

1. Java (Google App Engine)
2. PHP (Rackspace Cloud Sites)
3. Python Django (Google App Engine)
4. Ruby on Rails (Heroku)
5. .NET (Azure Services Platform, Rackspace Cloud Sites)
6. Proprietary (Force.com, WorkXpress, Wolf Frameworks)

1.3.6.2.3.4 Storage (Structured)

1. Databases (Amazon SimpleDB, BigTable)
2. File storage (Amazon S3, Nirvanix, Rackspace Cloud Files)
3. Queues (Amazon SQS)

1.3.6.2.4 Infrastructure
1.3.6.2.4.1 Definition

Cloud infrastructure (IaaS) is the delivery of computer infrastructure, typically a platform virtualization environment, as a service.

1.3.6.2.4.2 Types

1. Compute (Amazon CloudWatch, RightScale)
   a. Physical machines
   b. Virtual machines (Amazon EC2, GoGrid, Rackspace Cloud Servers)
   c. OS-level virtualization
2. Network (Amazon VPC)
3. Storage [Raw] (Amazon EBS)

1.3.6.2.5 Physical Layer

The physical layer consists of computer hardware and/or computer software products that are specifically designed for the delivery of cloud services (e.g., fabric computing).

1.4 Standards
1.4.1 Taxonomy
1.4.1.1 Across Cloud Services

As cloud computing becomes more common, applications will likely use different types of cloud services. An application might use a cloud storage service, a cloud message queue, and manage (start/stop/monitor) virtual machines running in the cloud. Standards to define how these different services work together should provide value.

1.4.1.2 Within Cloud Services

Standards also matter within a given type of cloud service. If different vendors' storage, queue, or compute offerings expose compatible interfaces, an application written against one provider can move to, or span, another with little rework.

1.4.1.3 Between the Cloud and Enterprise

Even as cloud computing emerges, enterprise architectures such as Java EE are not going away. Standards that define how an enterprise application communicates with resources such as a cloud database or a cloud message queue would enable those applications to use cloud services with little or no changes. Figuring out how to integrate cloud computing with existing architectures and development paradigms will be a major challenge for this group.

1.4.1.4 Within an Enterprise

Standards within an enterprise will be determined by requirements such as interoperability, auditability, security and management, and will build upon the standards that apply between enterprises and the cloud. The enterprise will interact with some combination of private, public and hybrid clouds.

1.4.2 Topics for Standards
1.4.2.1 SOA
1.4.2.1.1 WSDL 1.1

The Web Services Description Language (WSDL, pronounced 'wiz-dəl' or spelled out, 'W-S-D-L') is an XML-based language that provides a model for describing Web services.

1.4.2.1.2 SOAP 1.1

SOAP, originally defined as Simple Object Access Protocol, is a protocol specification for exchanging structured information in the implementation of Web Services in computer networks.

1.4.2.1.3 WS-I Basic Profile 1.0 or 1.1

The WS-I Basic Profile (official abbreviation is BP), a specification from the Web Services Interoperability industry consortium (WS-I), provides interoperability guidance for core Web Services specifications such as SOAP, WSDL, and UDDI. The profile uses Web Services Description Language (WSDL) to enable the description of services as sets of endpoints operating on messages.

1.4.2.1.4 UDDI 3.0.2

Universal Description, Discovery and Integration (UDDI, pronounced Yu-diː) is a platform-independent, Extensible Markup Language (XML)-based registry for businesses worldwide to list themselves on the Internet.

1.4.2.1.5 WS-Security 1.0 or 1.1

WS-Security (Web Services Security, short WSS) is a flexible and feature-rich extension to SOAP to apply security to Web services.

1.4.2.1.6 WS-BPEL 2.0

Business Process Execution Language (BPEL), short for Web Services Business Process Execution Language (WS-BPEL) is an OASIS[1] standard executable language for specifying interactions with Web Services. Processes in Business Process Execution Language export and import information by using Web Service interfaces exclusively.

1.4.2.1.7 BPMN

Business Process Modelling Notation (BPMN) is a graphical representation for specifying business processes in a workflow.

1.4.2.1.8 WSRP 1.0

Web Services for Remote Portlets (WSRP) is an OASIS-approved network protocol standard designed for communications with remote portlets.

1.4.2.1.9 XML Schema 1.0

XML Schema, published as a W3C Recommendation in May 2001, is one of several XML schema languages. It was the first separate schema language for XML to achieve Recommendation status from the W3C. Because of confusion between XML Schema as a specific W3C specification and the use of the same term to describe schema languages in general, some of the user community referred to this language as WXS, an initialism for W3C XML Schema, while others referred to it as XSD, an initialism for XML Schema Document — a document written in the XML Schema language, typically using the "xsd" XML namespace prefix and stored with the ".xsd" filename extension. In the draft of version 1.1, the W3C adopted XSD as the preferred name.

Like all XML schema languages, XSD can be used to express a set of rules to which an XML document must conform in order to be considered 'valid' according to that schema. Unlike most other schema languages, however, XSD was also designed with the intent that determining a document's validity would produce a collection of information adhering to specific data types.

1.4.2.1.10 XSLT 1.0

XSLT (XSL Transformations) is a declarative, XML-based language used for the transformation of XML documents into other XML documents. The original document is not changed; rather, a new document is created based on the content of an existing one.[2] The new document may be serialized (output) by the processor in standard XML syntax or in another format, such as HTML or plain text.[3] XSLT is often used to convert XML data into HTML or XHTML documents for display as a web page: the transformation may happen dynamically either on the client or on the server, or it may be done as part of the publishing process.

1.4.2.1.11 XPath 1.0

XPath, the XML Path Language, is a query language for selecting nodes from an XML document. In addition, XPath may be used to compute values (e.g., strings, numbers, or Boolean values) from the content of an XML document. XPath was defined by the World Wide Web Consortium (W3C).
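Most XML toolkits expose at least a subset of XPath. A small sketch using Python's standard-library xml.etree.ElementTree, whose findall accepts a limited XPath subset; the catalog data is invented for illustration.

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<catalog>
  <book genre="cloud"><title>Sector Basics</title></book>
  <book genre="soa"><title>WSDL in Practice</title></book>
</catalog>
""")

# Select every <title> node anywhere in the document.
titles = [t.text for t in doc.findall(".//title")]

# Narrow the selection with an attribute predicate.
cloud_titles = [t.text for t in doc.findall(".//book[@genre='cloud']/title")]
```

The same expressions work in any XPath 1.0 engine; ElementTree simply implements a convenient subset of the language.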

1.4.2.1.12 XQuery 1.0

XQuery is a query and functional programming language that is designed to query collections of XML data. XQuery 1.0 was developed by the XML Query working group of the W3C. The work was closely coordinated with the development of XSLT 2.0 by the XSL Working Group; the two groups shared responsibility for XPath 2.0, which is a subset of XQuery 1.0. XQuery 1.0 became a W3C Recommendation on January 23, 2007.

1.4.2.1.13 XML Signature

XML Signature (also called XMLDsig, XML-DSig, XML-Sig) is a W3C recommendation that defines an XML syntax for digital signatures, as defined in [1]. Functionally, it has much in common with PKCS#7 but is more extensible and geared towards signing XML documents. It is used by various Web technologies such as SOAP, SAML, and others.

1.4.2.1.14 XML Encryption

XML Encryption, also known as XML-Enc, is a specification, governed by a W3C recommendation, that defines how to encrypt the contents of an XML element.

1.4.2.2 Cloud Computing
1.4.2.2.1 Storage
1.4.2.2.1.1 ThriftStore

For cloud storage interoperability, we created ThriftStore to provide a common interface to multiple cloud storage frameworks. ThriftStore currently supports Sector and Hadoop, but support for other file systems such as KFS or Amazon's S3 could easily be added. Although there is some overhead associated with going through a Thrift service, ThriftStore provides the facility to implement a single client that can be used to access any supported cloud storage system. Additionally, the client can be written in any language supported by Thrift.

1.4.2.2.1.2 PySector

To provide a less general alternative to ThriftStore, we created PySector: a Python extension to the Sector C++ client that allows a developer to use 100% Python to run Sector file system commands. It can only be used with Sector, but has very little overhead.

1.4.2.2.1.3 SectorJNI

The Sector C++ client has also been opened up to Java applications by the Sector JNI project. Although there is also some overhead associated with the JNI bridge between Java and the native C++ code, it does allow for pure Java applications to use Sector.

1.4.2.2.2 Compute
1.4.2.2.2.1 Sector File System for Hadoop

The dominant open source compute cloud right now is Hadoop. We created an implementation of Hadoop's File System Interface which allows Sector to be used as the backing store for Hadoop MapReduce applications. This is a real-world use of the Sector JNI that we created for the Storage Interoperability Project. We used the JNI/NIO access to Sector and coded a client that takes Hadoop File System commands and passes them on to Sector. This gives developers the option of running Hadoop MapReduce jobs over data stored in Sector.

1.4.2.2.2.2 PySphere

Sphere is middleware that is designed to process data managed by Sector. Sphere implements an application framework to perform parallel processing on data stored in the Sector file system. Sphere supports two models for implementing applications: user defined functions (UDF) or MapReduce. The UDF model allows any user defined functions to be applied to a Sector dataset, while Sphere MapReduce follows the traditional MapReduce model. Sphere applications, regardless of the model, have to be written in C++.

1.4.2.3 Mobile/Handheld Devices

Most significantly, the Open Mobile Alliance is designed to be the center of mobile service enabler specification work, helping the creation of interoperable services across countries, operators and mobile terminals that will meet the needs of the user. To grow the mobile market, the companies supporting the Open Mobile Alliance will work towards stimulating the fast and wide adoption of a variety of new, enhanced mobile information, communication and entertainment services.

1.4.2.4 Virtualization

The DMTF’s technologies are designed to work together to address the industry’s needs and requirements for interoperable distributed management. These standards provide well-defined interfaces that build upon each other, delivering end-to-end management capabilities and interoperability. The interrelationships between the DMTF technologies deliver incremental value throughout the stack, with added value at each additional layer that is implemented.

1.4.2.4.1 Web-Based Enterprise Management (WBEM)
1.4.2.4.1.1 Protocols
1.4.2.4.1.2 Infrastructure
1.4.2.4.2 Common Information Model
1.4.2.4.2.1 Schema
1.4.2.4.2.2 Infrastructure
1.4.2.5 API
1.4.2.5.1 Levels
1.4.2.5.1.1 The Wire

At this level, the developer writes directly to the wire format of the request. If the service is REST-based, the developer creates the appropriate HTTP headers, creates the payload for the request, and opens an HTTP connection to the service. The REST service returns data with an accompanying HTTP response code. Because of the straightforward nature of many REST services, it is possible to be relatively efficient while writing code at this level. If the service is SOAP-based, the developer creates the SOAP envelope, adds the appropriate SOAP headers, and fills the body of the SOAP envelope with the data payload. The SOAP service responds with a SOAP envelope that contains the results of the request. Working with SOAP services requires parsing the XML content of the envelopes; for that reason, most SOAP services are invoked with a higher-level API.
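As a sketch of what wire-level coding involves, the following Python fragment assembles the headers for a REST request by hand, including an HMAC-based Authorization header. The signing scheme is a simplified, hypothetical string-to-sign loosely modeled on S3-style authentication, not any provider's actual protocol, and the request is built but not sent.

```python
import base64
import hashlib
import hmac
from email.utils import formatdate

def build_request(method, path, payload, access_key, secret_key):
    """Assemble headers and body for a REST call at the wire level.
    The string-to-sign and header format are a simplified, hypothetical
    scheme for illustration only."""
    date = formatdate(usegmt=True)
    string_to_sign = f"{method}\n{date}\n{path}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    headers = {
        "Date": date,
        "Content-Length": str(len(payload)),
        "Authorization": f"AWS {access_key}:{signature}",
    }
    return headers, payload

headers, body = build_request("PUT", "/bucket/object", b"hello",
                              "AKIDEXAMPLE", "secret")
```

Every detail here, from date formatting to signature calculation, is the developer's responsibility at this level, which is exactly the burden the toolkit levels below remove.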

1.4.2.5.1.2 Language-specific Toolkits

Developers at this level use a language-specific toolkit to work with SOAP or REST requests. Although developers are still focused on the format and structure of the data going across the wire, many of the details (handling response codes and calculating signatures, for example) are handled by the toolkit.

1.4.2.5.1.3 Service-specific Toolkit

The developer uses a higher-level toolkit to work with a particular service. Working at this level, the developer is able to focus on business objects and business processes. A developer can be far more productive when focusing on the data and processes that matter to the organization instead of focusing on the wire protocol.

1.4.2.5.1.4 Service-neutral Toolkit

This is the highest level of API. A developer working at this level uses a common interface to multiple cloud computing providers. As with Level 3, the developer focuses on business objects and business processes. Unlike Level 3, a developer working at Level 4 does not have to worry about which cloud service they are working with. An application written with a service-neutral toolkit should be able to switch to a different cloud vendor with minimal changes to the code, if any.
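The idea behind a service-neutral toolkit can be sketched as a small vendor-neutral interface. The InMemoryStore adapter below is a hypothetical stand-in for real provider adapters (S3, Sector, and so on); the point is that the business logic depends only on the abstract interface.

```python
from abc import ABC, abstractmethod

class CloudStorage(ABC):
    """Service-neutral storage interface (Level 4): application code
    depends only on this, never on a vendor API."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(CloudStorage):
    """Stand-in adapter for illustration; a real adapter would wrap a
    vendor SDK behind the same two methods."""

    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]

def archive_report(store: CloudStorage, report: bytes) -> bytes:
    # Business logic is vendor-neutral: changing providers means
    # constructing a different CloudStorage adapter, nothing else.
    store.put("reports/latest", report)
    return store.get("reports/latest")

result = archive_report(InMemoryStore(), b"q3 numbers")
```

Swapping cloud vendors then becomes a one-line change at the point where the adapter is constructed, which is the portability property this level of API promises.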

1.4.2.5.2 Categories
1.4.2.5.2.1 Ordinary Programming

The usual application programming interfaces in C#, PHP, Java, etc. There is nothing cloud-specific in this category.

1.4.2.5.2.2 Deployment

Programming interfaces to deploy applications to the cloud. In addition to any cloud-specific packaging technologies, this includes traditional packaging mechanisms such as .Net assemblies and EAR/WAR files.

1.4.2.5.2.3 Cloud Services

Programming interfaces that work with cloud services. As discussed in the previous section, cloud service APIs can be either service-specific or service-neutral. These APIs are divided into subcategories for cloud storage services, cloud databases, cloud message queues, and other cloud services. A developer writing code using cloud services APIs is aware that they are using the cloud.

1.4.2.5.2.4 Image and Infrastructure Management

Programming interfaces to manage virtual machine images and infrastructure details. APIs for images support uploading, deploying, starting, stopping, restarting, and deleting images. Infrastructure management APIs control details such as firewalls, node management, network management and load balancing.

1.4.2.5.2.5 Internal Interfaces

Programming interfaces for the internal interfaces between the different parts of a cloud infrastructure. These are the APIs you would use if you wanted to change vendors for the storage layer in your cloud architecture.

1.4.3 SAJACC
1.4.3.1 Step 1
1.4.3.2 Step 2
1.4.4 Security Risk Management - FedRAMP
1.4.4.1 Security
1.4.4.1.1 Regulations

Regulations are not technical issues, but they must be addressed. Laws and regulations will determine security requirements that take priority over functional requirements.

1.4.4.1.2 Security Controls

Although a given consumer might not need all of these security controls, consumers should be wary of any cloud provider that makes security-related claims and reassurances without an infrastructure capable of delivering all of them.

1.4.4.1.2.1 Asset Management

It must be possible to manage all of the hardware, network and software assets (physical or virtual) that make up the cloud infrastructure. This includes being able to account for any physical- or network-based access of an asset for audit and compliance purposes.

1.4.4.1.2.2 Cryptography: Key and Certificate Management

Any secure system needs an infrastructure for employing and managing cryptographic keys and certificates. This includes employing standards-based cryptographic functions and services to support information security at rest and in motion.

1.4.4.1.2.3 Data/Storage Security

It must be possible to store data in an encrypted format. In addition, some consumers will need their data to be stored separately from other consumers' data.

1.4.4.1.2.4 Endpoint Security

Consumers must be able to secure the endpoints to their cloud resources. This includes the ability to restrict endpoints by network protocol and device type.

1.4.4.1.2.5 Event Auditing and Reporting

Consumers must be able to access data about events that happen in the cloud, especially system failures and security breaches. Access to events includes the ability to learn about past events and reporting of new events as they occur. Cloud providers cause significant damage to their reputations when they fail to report events in a timely manner.

1.4.4.1.2.6 Identity, Roles, Access Control and Attributes

It must be possible to define the identity, roles, entitlements and any other attributes of individuals and services in a consistent, machine-readable way in order to effectively implement access control and enforce security policy against cloud-based resources.

1.4.4.1.2.7 Network Security

It must be possible to secure network traffic at the switch, router and packet level. The IP stack itself should be secure as well.

1.4.4.1.2.8 Security Policies

It must be possible to define, resolve, and enforce security policies in support of access control, resource allocation and any other decisions in a consistent, machine-readable way. The method for defining policies should be robust enough that SLAs and licenses can be enforced automatically.

1.4.4.1.2.9 Service Automation

There must be an automated way to manage and analyze security control flows and processes in support of security compliance audits. This also includes reporting any events that violate any security policies or customer licensing agreements.

1.4.4.1.2.10 Workload and Service Management

It must be possible to configure, deploy and monitor services in accordance with defined security policies and customer licensing agreements.

1.4.4.1.3 Security Federation Patterns

To implement these security controls, several federation patterns are needed. Cloud providers should deliver these patterns through existing security standards.

1.4.4.1.3.1 Trust

The ability for two parties to define a trust relationship with an authentication authority. That authentication authority is capable of exchanging credentials (typically X.509 certificates), and then using those credentials to secure messages and create signed security tokens (typically SAML). Federated trust is the foundation upon which all the other secure federation patterns are based.

1.4.4.1.3.2 Identity Management

The ability to define an identity provider that accepts a user’s credentials (a user ID and password, a certificate, etc.) and returns a signed security token that identifies that user. Service providers that trust the identity provider can use that token to grant appropriate access to the user, even though the service provider has no knowledge of the user.
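A minimal sketch of this pattern in Python, with a signed JSON blob standing in for a SAML assertion and a shared HMAC key standing in for the federated trust relationship; all names and the token format are illustrative, not any real identity product.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical key shared between the identity provider and the
# service providers that trust it.
TRUSTED_KEY = b"shared-with-identity-provider"

def issue_token(user_id, role):
    """Identity provider side: accept the user's credentials (elided)
    and return a signed security token. A signed JSON blob stands in
    for a SAML assertion to keep the sketch small."""
    claims = json.dumps({"sub": user_id, "role": role}).encode()
    sig = hmac.new(TRUSTED_KEY, claims, hashlib.sha256).hexdigest()
    return base64.b64encode(claims).decode() + "." + sig

def verify_token(token):
    """Service provider side: grant access based on any token whose
    signature verifies, even with no prior knowledge of the user."""
    claims_b64, sig = token.rsplit(".", 1)
    claims = base64.b64decode(claims_b64)
    expected = hmac.new(TRUSTED_KEY, claims, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("untrusted token")
    return json.loads(claims)

token = issue_token("alice", "analyst")
claims = verify_token(token)
```

The service provider never sees the user's password; it only checks that the token was signed by an authority it trusts, which is the essence of federated identity.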

1.4.4.1.3.3 Access Management

The ability to write policies (typically in XACML) that examine security tokens to manage access to cloud resources. Access to resources can be controlled by more than one factor. For example, access to a resource could be limited to users in a particular role, but only across certain protocols and only at certain times of the day.
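A toy policy check illustrating the multi-factor idea from the example above; the rule itself is hypothetical, and a real deployment would express it declaratively in XACML rather than in code.

```python
def access_allowed(claims, protocol, hour):
    """Hypothetical access rule in the spirit of XACML: grant access
    only when role, transport protocol, and time of day all check out
    (analysts, over HTTPS, between 09:00 and 17:00)."""
    return (
        claims.get("role") == "analyst"
        and protocol == "https"
        and 9 <= hour < 17
    )

decision = access_allowed({"sub": "alice", "role": "analyst"}, "https", 14)
```

Each condition is an independent factor, so tightening the policy (adding a location check, say) means adding a clause rather than rewriting the resource.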

1.4.4.1.3.4 Single Sign-on / Sign-Off

The ability to federate logins based on credentials from a trusted authority. Given an authenticated user with a particular role, federated single sign-on allows a user to login to one application and access other applications that trust the same authority. Federated single sign-off is part of this pattern as well; in many situations it will be vital that a user logging out of one application is logged out of all the others. The Single Sign-On pattern is enabled by the Identity Management pattern.

1.4.4.1.3.5 Audit and Compliance

The ability to collect audit and compliance data spread across multiple domains, including hybrid clouds. Federated audits are necessary to ensure and document compliance with SLAs and regulatory requirements.

1.4.4.1.3.6 Configuration Management

The ability to federate configuration data for services, applications and virtual machines. This data can include access policies and licensing information across multiple domains.

2 Day 2 - Cloud Computing Mission Relevance
2.1 Executive Views
2.1.1 Dave Wennergren, Dep. CIO, OSD
2.1.2 Bob Lentz, DoD Chief Security Officer
2.1.3 Thomas Dee, Director Defense Biometrics
2.1.4 Henry Sienkiewicz, DISA Cloud Computing
2.1.5 Mike Krieger, Dep. CIO, US Army
2.1.6 Rob Carey, CIO, US Navy
2.1.7 General Sorenson (Apps for the Army)
2.1.8 Col. Foster
2.1.9 Chris Kemp
2.1.10 Henry Sienkiewicz
2.1.11 Dave Wennergren
2.2 General Trends
2.3 Key Discussion Points
2.3.1 Benefits
2.3.1.1 Significant Cost Reductions
2.3.1.2 Increased Flexibility
2.3.1.3 Access Anywhere
2.3.1.4 Elastic Scalability
2.3.1.5 Easy to implement
2.3.1.6 Service Quality
2.3.1.7 Delegation of non-critical applications
2.3.1.8 Ease of Technology Refresh
2.3.1.9 Ease of Collaboration
2.3.2 Concerns
2.3.2.1 Security
2.3.2.2 Performance
2.3.2.3 Availability
2.3.2.4 Integration difficulty
2.3.2.5 Procurement process
2.3.2.6 Ability to Customize
2.3.2.7 Regulatory requirements
2.3.2.8 Political issues/concerns
2.3.2.9 Legal issues/concerns
2.3.3 Return on Investment
2.3.3.1 Indicator Ratios
2.3.3.1.1 Cloud ROI Cost Indicator Ratios
2.3.3.1.2 Cloud ROI Time Indicator Ratios
2.3.3.1.3 Cloud ROI Quality Indicator Ratios
2.3.3.1.4 Cloud ROI Profitability Indicator Ratios
2.3.3.1.5 Cloud ROI Savings Models
2.3.3.2 Business Metrics
2.3.3.2.1 Speed of Cost Reduction – Cost of Adoption/De-Adoption
2.3.3.2.2 Optimizing Ownership Use
2.3.3.2.3 Rapid Provisioning
2.3.3.2.4 Increase Margin (Make More Money)
2.3.3.2.5 Dynamic Usage – Elastic Provisioning and Service Management
2.3.3.2.6 Risk and Compliance Improvement
2.3.4 Economics
2.3.4.1 Avoid capital expenditures
2.3.4.2 Consumption billed as a utility
2.3.4.3 Low barriers to entry
2.3.4.4 Shared infrastructure cost
2.3.4.5 Low management overhead
2.3.4.6 Immediate access to broad range of applications
2.3.4.7 Immediate termination option
2.3.4.8 Enforceable Service Level Agreements
2.3.4.9 High Benefit-Cost Ratios
2.3.5 Inhibitors
2.3.5.1 Maintenance of status quo
2.3.5.2 Transition from infrastructure based security to data-centric security
2.3.5.3 Cloud portability
2.3.5.4 Cloud interoperability
2.3.5.5 Identity management and federation
2.3.5.6 Data and application federation
2.3.5.7 Development of appropriate Service Level Agreements
2.3.5.8 Cloud Governance
2.3.5.9 Transaction and concurrency across clouds
2.3.5.10 Technology standards
2.3.6 Other Issues
2.3.6.1 Security and privacy
2.3.6.2 SLA Benchmarks
2.3.6.3 Location awareness
2.3.6.4 Metering & Monitoring
2.3.6.5 Common infrastructure file formats
2.3.6.6 Lifecycle management
2.3.6.7 VM deployment & termination
2.3.6.8 Government/DoD specific standards and protocols
2.4 Operational Architectures
2.4.1 Intelligence Community

“Develop a common ‘cloud’ based on a single backbone network and clusters of servers in scalable, distributed centers where data is stored, processed and managed.”

2.4.1.1 NSA

In pilots, MapReduce-based cloud computing has proven an effective method for addressing enterprise-scale data problems.

2.4.1.2 NGA

a. Using “cloud computing technology” for imagery processing

2.4.1.3 CIA

1. “…If I look back at CIA’s technology strategy for the past few years, we were headed to an Enterprise Cloud all along…”

2.4.2 DoD
2.4.2.1 DISA
2.4.2.1.1 RACE
2.4.2.1.2 GCDS
2.4.2.1.3 Forge.mil
2.4.2.2 US Navy

InRelief.org
1. A US Navy effort, managed by San Diego State University (a registered NGO), to promote better interactions and results when disasters strike.
2. Built on the Google platform.
3. A collaborative environment used to promote information sharing between international organizations, non-governmental organizations, government organizations and military groups responding to a natural or man-made disaster.

2.4.2.3 US Air Force

A 10-month effort with IBM to demonstrate:
i. The concept and feasibility of Dynamic Cyber Defense within cloud computing operations
ii. Security of cloud-connected resources and users operating across uncontrolled and unprotected Wide Area Networks
iii. Deployment and dynamic operations of PC-based clients, HW/SW servers, applications and services within a cloud environment
iv. Possible responses to dynamic or scenario-based changes to mission priorities in near real time, dynamically and securely

2.4.2.4 US Army
2.4.2.4.1 US Army GNEC
2.4.2.5 US TRANSCOM
3 Day 3 - Cloud Planning Exercise
3.1 Cloud Computing Reference Model
3.1.1 Ground Rules
3.1.1.1 Cloud Tiers Enable Higher-Level Tiers.

Each cloud tier, working from the bottom up in the Cloud Computing Reference Model, enables the cloud tier or tiers above it. The tiers build upon one another, yet each is an independent, separately accessible cloud capability in its own right.

3.1.1.2 Cloud Tiers Are Individually “Atomic” and Individually Accessible.

Cloud consumers can access and consume cloud-enabled resources directly from any of these tiers, independent of the others, via a cloud API and a portal or self-service user interface of some fashion.

3.1.1.3 All Cloud Tiers Need Ecosystem Enablement and Cloud Dial Tone.

Each cloud tier must have the necessary Cloud Network/Dial Tone and Cloud Ecosystem Enablement capabilities in order to be discoverable, provisionable and consumable as a service via the cloud. Furthermore, cloud providers and consumers must be able to find one another, communicate and negotiate, and then engage by establishing business and technical relationships via a service contract and appropriate technical interfaces to cloud capabilities, with clearly defined SLAs and QoS agreed to in advance.

3.1.2 Functional Model
3.1.2.1 Foundation

The Cloud Foundation is established by the tools and technologies that enable virtualization of network, computing, storage and security resources, over a highly reliable network and computing infrastructure.

3.1.2.1.1 Physical Tier

Provides the physical computing, storage, network and security resources that are virtualized and cloud-enabled to support cloud requirements. The physical tier is not cloud-specific in itself; it provides the substrate on which cloud virtualization technologies and cloud operating system platforms build to realize higher-order cloud patterns.

3.1.2.1.1.1 Computing resources.
3.1.2.1.1.2 Storage resources.
3.1.2.1.1.3 Network resources.
3.1.2.1.1.4 Security resources.
3.1.2.1.2 Virtualization Tier

Provides core physical hardware virtualization and the foundation for cloud computing.

3.1.2.1.2.1 Virtualization Technology
3.1.2.1.2.2 Virtualization Management
3.1.2.2 Enablement

The Cloud Enablement category refers to two of the tiers: the Cloud Operating System (OS) Tier and the Cloud Platform Tier. Both are cloud enablement tiers at the core of cloud computing: the OS capabilities are essential to create cloud-based capabilities, and the Cloud Platform Tier enables a broad range of platforms, applications, and business capabilities to be provided as a service via the cloud. However, we must be clear: cloud enablement applies to all the Cloud Computing Reference Model tiers we have identified. The two middle tiers, OS and Platform, are simply especially critical to realizing the true potential of cloud.

3.1.2.2.1 Operating System Tier

Provides the cloud computing “fabric,” as well as application virtualization, core cloud provisioning, metering, billing, load balancing, workflow and related functionality typical of cloud platforms.

3.1.2.2.1.1 SOA enablement technology
3.1.2.2.1.2 Billing and metering
3.1.2.2.1.3 Chargeback and financial integration
3.1.2.2.1.4 Load balancing and performance assurance
3.1.2.2.1.5 Monitoring, management, and SLA enforcement
3.1.2.2.1.6 Resource provisioning and management
3.1.2.2.1.7 Onboarding and offboarding automation
3.1.2.2.1.8 Security and privacy tools/controls
3.1.2.2.1.9 Cloud pattern enablement tools (see Logical Cloud Stack)
3.1.2.2.1.10 Cloud workflow, process management, and orchestration tools
3.1.2.2.2 Platform Tier

Provides the technical solutions that comprise cloud platforms, as well as the cloud platforms themselves, offered via PaaS delivery models.

3.1.2.2.2.1 PaaS as pre-assembled, integrated application platforms provided to others (e.g., Google App Engine, Salesforce’s Force.com).
3.1.2.2.2.2 SOA middleware, services and other related SOA enablement middleware and capabilities.
3.1.2.2.2.3 Application container services, application servers, and related application hosting and runtime services.
3.1.2.2.2.4 Web application and content servers, content hosting and delivery, and Web server capabilities.
3.1.2.2.2.5 Messaging, mediation, integration, and related messaging services and middleware, event engines, complex event processing and related event middleware.
3.1.2.2.2.6 Developer resources to support developer onboarding, application development, testing resources, sandbox functionality, and application provisioning and hosting, plus the related application metering, billing, and support capabilities.
3.1.2.3 Exploitation

The Cloud Exploitation category refers to the consumption of cloud-based resources to address specific business, mission, information technology and infrastructure needs. It can refer to the exploitation of all cloud tiers and combinations of cloud patterns and deployment models to address business requirements. The Cloud Exploitation category is the consumer side of cloud.

3.1.2.3.1 Business Tier (General)

Comprises the business or mission exploitation of cloud-enabled business applications, software, data, content, knowledge and associated analysis frameworks, along with other “cloud consumption” models that enable end users to derive business value by accessing, binding and consuming cloud capabilities.

3.1.2.3.1.1 SaaS, including email, business applications, enterprise applications, desktop software, business utilities (email, calendar, synchronization), portal, and so forth.
3.1.2.3.1.2 DaaS/KaaS
3.1.2.3.1.3 Business processes as a service
3.1.2.3.2 Business Tier (Specific)
3.1.2.4 Deployment
3.1.2.4.1 Internal/Private Cloud

An internal cloud, or private cloud, is an internal deployment where cloud computing capabilities are planned, architected, acquired, and implemented to support internal business requirements, while avoiding perceived risks around security, privacy, and even the relative immaturity of the cloud industry and technology landscape. A private cloud primarily brings value via proven virtualization technologies, which can extend from the Cloud Virtualization Tier up through the Cloud Platform Tier and even to the Cloud Business Tier. In this manner, internal private clouds can be highly valuable to an enterprise, bringing new capabilities that exceed the well-established hardware virtualization model that we know today. Given this context, internal private clouds that push higher up the logical cloud stack have a compelling value proposition for your enterprise.

3.1.2.4.2 External/Public Cloud

An external or public cloud is provided by an independent external entity, typically a cloud service provider. Amazon, Salesforce, Google, and many other cloud service providers represent the external public cloud deployment model.

Key attributes of the public cloud deployment pattern are as follows:
a. Provided by an independent third-party cloud service provider.
b. Accessed via the Web and a self-service user interface.
c. Readily available user guides, onboarding APIs and technical support.
d. SLAs and service contracts.
e. Multiple virtual machine instances available in varying configurations based on your specific requirements, including processor configuration and RAM, operating system, application server and development environments.
f. Multiple cloud resource types available; for example, Amazon provides Amazon Simple Storage Service (S3), Amazon Elastic Compute Cloud (EC2), Amazon SimpleDB, Amazon CloudFront (content delivery, similar to Akamai), Amazon Simple Queue Service (SQS) and Amazon Elastic MapReduce.

3.1.2.4.3 Hybrid/Integrated Cloud

Hybrid clouds, or integrated clouds, are scenarios where an organization blends its internal private cloud with cloud capabilities provided through public clouds by third-party cloud service providers. Hybrid clouds require cloud integration. Cloud integration and interoperability is an emerging challenge for the cloud industry, and is already spawning industry standards bodies to help define frameworks for cloud interfaces and APIs, cloud integration and interoperability standards, and even tools that enable cross-cloud composition and orchestration of cloud resources in support of emerging business model needs.

The following are attributes of hybrid, integrated clouds:
a. Blend a combination of internal cloud and external cloud-enabled resources.
b. Take advantage of the cost economics of external third-party clouds, while mitigating some of the risks by maintaining an internal private cloud for critical processes and data.
c. Require integration of externally and internally provided capabilities, which must overcome vendor-proprietary APIs and integrate them with your internal interfaces.
d. May segment the Cloud Enablement Model tiers into those you will cloud-enable as private clouds (e.g., data and storage), while others may be pushed to third-party external clouds. Risk analysis and security assessments may help determine which cloud enablement tiers, and which resources within those tiers, are best provided as private, public or hybrid models.

3.1.2.4.4 Community Cloud

Community clouds are a deployment pattern suggested by NIST, where semi-private clouds are formed to meet the needs of a set of related stakeholders or constituents that have common requirements or interests. Communities of Interest (COI) constructs typical of the federal government may use community clouds to augment their wiki-centric collaboration processes with cloud-enabled capabilities as well.

A community cloud may be private to its stakeholders, or may be a hybrid that integrates the respective private clouds of the members, yet enables them to share and collaborate across those clouds by exposing data or resources into the community cloud.

3.1.2.5 Governance

Cloud governance is an emerging requirement of cloud computing. It encompasses a broad set of business and technical requirements: the planning and architecture process, the design-time considerations of cloud computing, functional and non-functional requirements analysis, the actual process of onboarding your enterprise onto a cloud (internal, public, or hybrid), and the monitoring and operations requirements once you are successfully leveraging a cloud. There are significant gaps in the cloud governance domain, as highlighted below:

3.1.2.5.1 Cloud Lifecycle Governance

Lack of cloud governance process models for the complete cloud lifecycle, including cloud strategy, planning, modeling and architecture, onboarding and offboarding, cloud portability, cloud requirements analysis, and operations and sustainment.

3.1.2.5.2 Cloud Policy Models and Policy Enforcement Frameworks.

Immaturity of cloud policies, policy enforcement models and frameworks to support runtime operations, policy enforcement for quality of service, SLAs, security, and more.

3.1.2.5.3 Cloud Management and Monitoring Tools

c. Cloud Management and Monitoring Tools. The absence of cloud monitoring and management tools is being offset by SOA vendors and cloud technology providers offering their own management and monitoring solutions. However, more work is needed to develop comprehensive management and monitoring solutions covering the full range of cloud deployment models and integration scenarios.

3.1.2.5.4 Cloud Operations and Support Models
3.1.2.5.5 Cloud Application Lifecycle

3. Cloud-based Application Lifecycle Governance. There is a major industry gap in application lifecycle governance for applications developed on cloud-centric platforms, such as Force.com, Google App Engine, and others, as well as for architecting software applications specifically for cloud-based deployment models.

3.1.2.5.6 Application Migration

4. Legacy Application Migration to the Cloud. There are few standards or application migration models to support migrating legacy applications into cloud deployments. Such application migration efforts are immature at best, and many cloud solutions today are oriented toward more contemporary design concepts and approaches (e.g., object orientation, service-enablement and SOA, and of course Web 2.0 concepts of mash-ups, social computing frameworks, and collaboration). A lot of work is necessary to close these application development, application migration, and application refactoring gaps.

3.1.2.5.7 Distributed governance and monitoring infrastructure
3.1.2.5.8 Governance platforms that span private, public and hybrid clouds to provide a single operational picture
3.1.2.5.9 Cloud onboarding, offboarding, and portability
3.1.2.5.10 Cloud design-time and run-time considerations
3.1.2.5.11 Cloud quality assurance and testing
3.1.2.6 Operations

2. Cloud Operations and Support Models. There is clear immaturity of cloud operations and support models for deployments that involve more than traditional data center operations and hardware virtualization concepts.

3.1.2.6.1 Culture & Behavior

5. Culture and Behavior. A critical aspect of cloud computing is the behavioral and cultural dimensions that will facilitate adoption within an enterprise, and enable the full potential of cloud to be realized by a given organization. We urge you to conduct an explicit examination of your cultural barriers and enablers, and understand the behavioral model necessary to move to cloud computing. This can be especially challenging when an organization is attempting to establish its central IT organization as an internal cloud service provider. Cultural and behavioral factors will create the environment for cloud success, or they will be the reasons for its failure. The technology will not be the reason for cloud failures.

3.1.2.6.2 Funding Models & Incentives

6. Funding Models and Incentives. Corresponding to the cultural and behavioral factors will be the funding models and incentive models necessary to support an enterprise cloud deployment. Funding models and incentives will support the desired behavioral transformation required for cloud success. Establishing the funding, budgeting, chargeback and other financial mechanisms of your cloud strategy is a critical need. Creating incentives for cloud consumption from an internal cloud service provider will also be necessary. Together, funding, incentives, budgeting practices, behavior and culture will establish the ecosystem and environment for cloud success.

3.1.2.6.3 Security & Privacy

ii. Security and Privacy. Cloud computing is certainly under scrutiny for its ability to securely manage data and information without compromising the data security requirements, privacy concerns, and data integrity challenges that accompany cloud deployments. Cloud security and privacy concerns will initially fuel more internal cloud deployments, until trust in cloud-based security can be established. Counterintuitively, many business leaders feel their data is more secure in an external, professionally managed data center than in their own. This suggests that the security and privacy concerns over cloud may be overcome more easily than those of SOA and Web services.

3.1.2.6.4 Management & Monitoring

iii. Management and Monitoring. As described above, cloud management and monitoring requirements must be clearly articulated and understood based on the Cloud Enablement Model and Cloud Deployment Model you have developed during your cloud planning and architecture process. You must consider the instrumentation and tooling necessary to monitor and manage your cloud, whether your deployment is an internal private cloud, or whether you are leveraging third-party external clouds from Amazon or Salesforce.com. Either way, you must be able to integrate and automate the monitoring, performance management, alarming and alerting of cloud events and performance metrics in order to respond to outages, performance degradations, and related operational concerns.

3.1.2.6.5 Support

iv. Operations and Support. Cloud operations and support requirements are also essential to plan for in your cloud planning framework. Thus, they are also an explicit consideration of the Cloud Governance and Operations sub-model. As mentioned above, the entire Cloud Lifecycle Model is poorly understood, and particularly for cloud operations and support. While many of these processes can be adapted from current Information Technology Infrastructure Library (ITIL), Control Objectives for Information and related Technology (COBIT) and other IT management frameworks, you will also need to leverage your current data center and IT support, help desk and operations processes and adapt them based on your chosen cloud deployment model. Operations and support for hybrid and public clouds will be a fast-moving area of emphasis, and you must spend appropriate time understanding the operations and support requirements based on the cloud deployment, based on which cloud enablement tiers and cloud patterns you intend to exploit.

3.1.2.7 Cloud Ecosystem
3.1.2.7.1 Cloud Ecosystem Enablement
3.1.2.7.2 Cloud Network/Cloud Dial Tone
3.1.2.7.3 Cloud Consumers and Cloud Providers
3.1.2.7.4 Cloud Physical Access, Integration, and Distribution
3.2 Use Cases
3.2.1 General Use Cases
3.2.1.1 End User to Cloud

Applications running on the cloud and accessed by end users

3.2.1.2 Enterprise to Cloud to End User

Applications running in the public cloud and accessed by employees and customers

3.2.1.3 Enterprise to Cloud

Cloud applications integrated with internal IT capabilities

3.2.1.3.1 Cloudbursting
3.2.1.4 Enterprise to Cloud to Enterprise

Cloud applications running in the public cloud and interoperating with partner applications (supply chain)

3.2.1.5 Private Cloud

A cloud hosted by an organization inside that organization’s firewall.

3.2.1.6 Community Cloud

A subset of the Hybrid Cloud pattern, using an intranet instead of the Internet.

3.2.1.7 Changing Cloud Vendors

An organization using cloud services decides to switch cloud providers or work with additional providers.

3.2.1.8 Hybrid Cloud

Multiple clouds work together, coordinated by a cloud broker that federates data, applications, user identity, security and other details.

3.2.2 Functional Use Cases (NIST)
3.2.2.1 File/Object System Like
3.2.2.2 Job Control & Programming
3.2.2.3 Cloud 2 Cloud
3.2.2.4 Administration
3.2.2.5 Data Management
3.2.3 Tactical/Deployable Use Cases
3.2.3.1 Cloudbursting
3.2.3.2 Joint/allied/interagency cloud-based collaboration
3.2.3.3 Virtual Infrastructure Binding (shipboard, land vehicle)
3.2.3.4 Compute/Storage provisioning of Robotic Forces (i.e. UAV)
3.2.3.5 Fleet Software Maintenance
3.2.3.6 Fleet IT Casualty Response
3.2.3.7 Exercise/Contingency Planning & Response
3.3 Requirements
3.3.1 Operational Requirements
3.3.1.1 End User to Cloud
3.3.1.1.1 Identity

The cloud service must authenticate the end user.

3.3.1.1.2 Open Client

Access to the cloud service should not require a particular platform or technology.

3.3.1.1.3 Security
3.3.1.1.4 SLA

Although service level agreements for end users will usually be much simpler than those for enterprises, cloud vendors must be clear about what guarantees of service they provide.

3.3.1.2 Enterprise to Cloud to End User
3.3.1.2.1 Identity

The cloud service must authenticate the end user.

3.3.1.2.2 Open Client

Access to the cloud service should not require a particular platform or technology.

3.3.1.2.3 Federated Identity

In addition to the basic identity needed by an end user, an enterprise user is likely to have an identity with the enterprise. Ideally, the enterprise user manages a single ID, with an infrastructure federating the other identities that might be required by cloud services.

3.3.1.2.4 Location Awareness

Depending on the kind of data the enterprise is managing on the user's behalf, there might be legal restrictions on the location of the physical server where the data is stored. Although this violates the cloud computing ideal that the user should not have to know details of the physical infrastructure, this requirement is essential. Many applications cannot be moved to the cloud until cloud vendors provide an API for determining the location of the physical hardware that delivers the cloud service.
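A minimal sketch of how such a location check could work, assuming a hypothetical `get_server_region` call (no standard API for this existed at the time of writing); the endpoint names and allowed-region list are invented for illustration:

```python
# Sketch: guard uploads with a region check before storing regulated data.
# `get_server_region` stands in for a provider-supplied API (hypothetical);
# the endpoints and regions below are canned values for illustration.

ALLOWED_REGIONS = {"eu-west", "eu-central"}  # e.g., EU data-protection rules

def get_server_region(service_endpoint):
    # Hypothetical provider call; here a lookup table for illustration.
    return {"storage.example.eu": "eu-west",
            "storage.example.us": "us-east"}.get(service_endpoint)

def safe_to_store(service_endpoint):
    """Return True only if the physical hardware is in an allowed region."""
    return get_server_region(service_endpoint) in ALLOWED_REGIONS
```

An application would call such a check before each upload of regulated data, rather than trusting the provider's default placement.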

3.3.1.2.5 Metering and Monitoring

All cloud services must be metered and monitored for cost control, chargebacks and provisioning.
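The metering and chargeback requirement can be illustrated with a toy usage meter; the metric names and per-unit rates below are assumptions of this sketch, not any provider's actual pricing:

```python
# Sketch of a minimal usage meter for chargeback; rates are invented.
from collections import defaultdict

class UsageMeter:
    RATES = {"storage_gb_hours": 0.001, "vm_hours": 0.10}  # assumed prices

    def __init__(self):
        self.usage = defaultdict(float)

    def record(self, department, metric, amount):
        # Accumulate metered consumption per department and metric.
        self.usage[(department, metric)] += amount

    def chargeback(self, department):
        # Total cost attributable to one department.
        return sum(amount * self.RATES[metric]
                   for (dept, metric), amount in self.usage.items()
                   if dept == department)

meter = UsageMeter()
meter.record("finance", "vm_hours", 100)
meter.record("finance", "storage_gb_hours", 5000)
```

A real implementation would persist these records and feed them to both provisioning decisions and internal billing.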

3.3.1.2.6 Management and Governance

Public cloud providers make it very easy to open an account and begin using cloud services; that ease of use creates the risk that individuals in an enterprise will use cloud services on their own initiative. Management of VMs and of cloud services such as storage, databases and message queues is needed to track what services are used. Governance is crucial to ensure that policies and government regulations are followed wherever cloud computing is used. Other governance requirements will be industry- and geography-specific.

3.3.1.2.7 Security

Any use case involving an enterprise will have more sophisticated security requirements than one involving a single end user. Similarly, the more advanced enterprise use cases that follow will have correspondingly more advanced security requirements.

3.3.1.2.8 Common File Format for VMs

A VM created for one cloud vendor’s platform should be portable to another vendor’s platform. Any solution to this requirement must account for differences in the ways cloud vendors attach storage to virtual machines.

3.3.1.2.9 Common APIs for Cloud Storage and Middleware

The enterprise use cases require common APIs for access to cloud storage services, cloud databases, and other cloud middleware services such as message queues. Writing custom code that works only for a particular vendor’s cloud service locks the enterprise into that vendor’s system and eliminates some of the financial benefits and flexibility that cloud computing provides.

3.3.1.2.10 Data and Application Federation

Enterprise applications need to combine data from multiple cloud-based sources, and they need to coordinate the activities of applications running in different clouds.

3.3.1.2.11 SLAs and Benchmarks

In addition to the basic SLAs required by end users, enterprises who sign contracts based on SLAs will need a standard way of benchmarking performance. There must be an unambiguous way of defining what a cloud provider will deliver, and there must be an unambiguous way of measuring what was actually delivered.
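One way to make such a measurement unambiguous is to compute delivered availability directly from downtime and compare it against the contracted target; the 99.9% figure below is an example, not a quote from any provider's SLA:

```python
# Sketch: an unambiguous availability measurement against an SLA target.

def measured_availability(total_minutes, downtime_minutes):
    # Fraction of the measurement window the service was actually up.
    return (total_minutes - downtime_minutes) / total_minutes

def sla_met(total_minutes, downtime_minutes, target=0.999):
    # True when delivered availability meets or exceeds the contract.
    return measured_availability(total_minutes, downtime_minutes) >= target

# A 30-day month has 43,200 minutes; a 99.9% target tolerates
# at most 43.2 minutes of downtime in that window.
```

The key point is that both sides agree on the measurement window, the definition of downtime, and the arithmetic before the contract is signed.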

3.3.1.2.12 Lifecycle Management

Enterprises must be able to manage the lifecycle of applications and documents. This requirement includes versioning of applications and the retention and destruction of data. Discovery is a major issue for many organizations. There are substantial legal liabilities if certain data is no longer available. In addition to data retention, in some cases an enterprise will want to make sure data is destroyed at some point.

3.3.1.3 Enterprise to Cloud
3.3.1.3.1 Federated Identity

In addition to the basic identity needed by an end user, an enterprise user is likely to have an identity with the enterprise. Ideally, the enterprise user manages a single ID, with an infrastructure federating the other identities that might be required by cloud services.

3.3.1.3.2 Open Client

Access to the cloud service should not require a particular platform or technology.

3.3.1.3.3 Location Awareness

Depending on the kind of data the enterprise is managing on the user's behalf, there might be legal restrictions on the location of the physical server where the data is stored. Although this violates the cloud computing ideal that the user should not have to know details of the physical infrastructure, this requirement is essential. Many applications cannot be moved to the cloud until cloud vendors provide an API for determining the location of the physical hardware that delivers the cloud service.

3.3.1.3.4 Identity

The cloud service must authenticate the end user.

3.3.1.3.5 Metering and Monitoring

All cloud services must be metered and monitored for cost control, chargebacks and provisioning.

3.3.1.3.6 Management and Governance

Public cloud providers make it very easy to open an account and begin using cloud services; that ease of use creates the risk that individuals in an enterprise will use cloud services on their own initiative. Management of VMs and of cloud services such as storage, databases and message queues is needed to track what services are used. Governance is crucial to ensure that policies and government regulations are followed wherever cloud computing is used. Other governance requirements will be industry- and geography-specific.

3.3.1.3.7 Security

Any use case involving an enterprise will have more sophisticated security requirements than one involving a single end user. Similarly, the more advanced enterprise use cases that follow will have correspondingly more advanced security requirements.

3.3.1.3.8 Common File Format for VMs

A VM created for one cloud vendor’s platform should be portable to another vendor’s platform. Any solution to this requirement must account for differences in the ways cloud vendors attach storage to virtual machines.

3.3.1.3.9 Common APIs for Cloud Storage and Middleware

The enterprise use cases require common APIs for access to cloud storage services, cloud databases, and other cloud middleware services such as message queues. Writing custom code that works only for a particular vendor’s cloud service locks the enterprise into that vendor’s system and eliminates some of the financial benefits and flexibility that cloud computing provides.

3.3.1.3.10 Data and Application Federation

Enterprise applications need to combine data from multiple cloud-based sources, and they need to coordinate the activities of applications running in different clouds.

3.3.1.3.11 SLAs and Benchmarks

In addition to the basic SLAs required by end users, enterprises who sign contracts based on SLAs will need a standard way of benchmarking performance. There must be an unambiguous way of defining what a cloud provider will deliver, and there must be an unambiguous way of measuring what was actually delivered.

3.3.1.3.12 Lifecycle Management

Enterprises must be able to manage the lifecycle of applications and documents. This requirement includes versioning of applications and the retention and destruction of data. Discovery is a major issue for many organizations. There are substantial legal liabilities if certain data is no longer available. In addition to data retention, in some cases an enterprise will want to make sure data is destroyed at some point.

3.3.1.3.13 Deployment

It should be simple to build a VM image and deploy it to the cloud as necessary. When that VM image is built, it should be possible to move that image from one cloud provider to another, compensating for the different mechanisms vendors have for attaching storage to VMs. Deployment of applications to the cloud should be straightforward as well.

3.3.1.3.14 Industry-specific standards and protocols

Many cloud computing solutions between enterprises will use existing standards such as RosettaNet or OAGIS. The applicable standards will vary from one application to the next and from one industry to the next.

3.3.1.4 Enterprise to Cloud to Enterprise
3.3.1.4.1 Federated Identity

In addition to the basic identity needed by an end user, an enterprise user is likely to have an identity with the enterprise. Ideally, the enterprise user manages a single ID, with an infrastructure federating the other identities that might be required by cloud services.

3.3.1.4.2 Open Client

Access to the cloud service should not require a particular platform or technology.

3.3.1.4.3 Location Awareness

Depending on the kind of data the enterprise is managing on the user's behalf, there might be legal restrictions on the location of the physical server where the data is stored. Although this violates the cloud computing ideal that the user should not have to know details of the physical infrastructure, this requirement is essential. Many applications cannot be moved to the cloud until cloud vendors provide an API for determining the location of the physical hardware that delivers the cloud service.

3.3.1.4.4 Identity

The cloud service must authenticate the end user.

3.3.1.4.5 Metering and Monitoring

All cloud services must be metered and monitored for cost control, chargebacks and provisioning.

3.3.1.4.6 Management and Governance

Public cloud providers make it very easy to open an account and begin using cloud services; that ease of use creates the risk that individuals in an enterprise will use cloud services on their own initiative. Management of VMs and of cloud services such as storage, databases and message queues is needed to track what services are used. Governance is crucial to ensure that policies and government regulations are followed wherever cloud computing is used. Other governance requirements will be industry- and geography-specific.

3.3.1.4.7 Security

Any use case involving an enterprise will have more sophisticated security requirements than one involving a single end user. Similarly, the more advanced enterprise use cases that follow will have correspondingly more advanced security requirements.

3.3.1.4.8 Common File Format for VMs

A VM created for one cloud vendor’s platform should be portable to another vendor’s platform. Any solution to this requirement must account for differences in the ways cloud vendors attach storage to virtual machines.

3.3.1.4.9 Common APIs for Cloud Storage and Middleware

The enterprise use cases require common APIs for access to cloud storage services, cloud databases, and other cloud middleware services such as message queues. Writing custom code that works only for a particular vendor’s cloud service locks the enterprise into that vendor’s system and eliminates some of the financial benefits and flexibility that cloud computing provides.

3.3.1.4.10 Data and Application Federation

Enterprise applications need to combine data from multiple cloud-based sources, and they need to coordinate the activities of applications running in different clouds.

3.3.1.4.11 SLAs and Benchmarks

In addition to the basic SLAs required by end users, enterprises who sign contracts based on SLAs will need a standard way of benchmarking performance. There must be an unambiguous way of defining what a cloud provider will deliver, and there must be an unambiguous way of measuring what was actually delivered.

3.3.1.4.12 Lifecycle Management

Enterprises must be able to manage the lifecycle of applications and documents. This requirement includes versioning of applications and the retention and destruction of data. Discovery is a major issue for many organizations. There are substantial legal liabilities if certain data is no longer available. In addition to data retention, in some cases an enterprise will want to make sure data is destroyed at some point.

3.3.1.4.13 Deployment

It should be simple to build a VM image and deploy it to the cloud as necessary. When that VM image is built, it should be possible to move that image from one cloud provider to another, compensating for the different mechanisms vendors have for attaching storage to VMs. Deployment of applications to the cloud should be straightforward as well.

3.3.1.4.14 Industry-specific standards and protocols

Many cloud computing solutions between enterprises will use existing standards such as RosettaNet or OAGIS. The applicable standards will vary from one application to the next and from one industry to the next.

3.3.1.4.15 Transaction Concurrency

For applications and data shared by different enterprises, transactions and concurrency are vital. If two enterprises are using the same cloud-hosted application, VM, middleware or storage, it is important that changes made by either enterprise are applied reliably and consistently.

3.3.1.4.16 Interoperability

Because more than one enterprise is involved, interoperability between the enterprises is essential.

3.3.1.5 Private Cloud
3.3.1.5.1 Open Client
3.3.1.5.2 Metering & Monitoring
3.3.1.5.3 Management & Governance
3.3.1.5.4 Security
3.3.1.5.5 Deployment
3.3.1.5.6 Interoperability
3.3.1.5.7 Common VM Format
3.3.1.5.8 SLAs
3.3.1.6 Changing Cloud Vendors
3.3.1.6.1 Open Client
3.3.1.6.2 Location Awareness
3.3.1.6.3 Security
3.3.1.6.4 SLAs
3.3.1.6.5 Common VM file format
3.3.1.6.6 Common Cloud Storage API
3.3.1.6.7 Common Cloud Middleware API

This includes all of the operations supported by today's cloud services, including cloud databases, cloud message queues and other middleware: for example, APIs for connecting to databases and for creating and dropping databases and tables. Cloud database vendors have imposed certain restrictions to make their products more elastic and to limit the possibility of queries against large data sets consuming significant resources. For example, some cloud databases don't allow joins across tables, and some don't support a true database schema. Those restrictions are a major challenge to moving between cloud database vendors, especially for applications built on a true relational model. Other middleware services such as message queues are more similar to one another, so finding common ground among them should be simpler.
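The join restriction typically forces write-time denormalization. A sketch, using in-memory dictionaries to stand in for a join-free cloud database (the record fields and store names are invented for illustration):

```python
# Sketch: working around a cloud database that forbids joins by
# denormalizing at write time.

orders = {}  # order_id -> order record with customer data embedded

def place_order(order_id, customer, items):
    # Instead of joining an orders table to a customers table at query
    # time, copy the needed customer fields into each order record.
    orders[order_id] = {"customer_name": customer["name"],
                        "customer_region": customer["region"],
                        "items": items}

def orders_for_region(region):
    # A single-table scan replaces the SELECT ... JOIN a relational
    # application would have written.
    return [oid for oid, o in orders.items()
            if o["customer_region"] == region]

place_order("o-1", {"name": "Acme", "region": "EMEA"}, ["widget"])
place_order("o-2", {"name": "Initech", "region": "APAC"}, ["gadget"])
```

The cost of this design is duplicated customer data that must be kept consistent on update, which is exactly the kind of trade-off that makes migration between relational and join-free vendors painful.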

3.3.1.6.8 SaaS Vendor
3.3.1.6.8.1 Industry-specific standards
3.3.1.6.9 Changing Middleware Vendors
3.3.1.6.9.1 Industry-specific standards
3.3.1.6.9.2 Common Cloud Middleware APIs
3.3.1.6.10 Changing Cloud Storage Vendors
3.3.1.6.10.1 Common Cloud Storage API
3.3.1.6.11 Changing VM host
3.3.1.6.11.1 Common VM Format
3.3.1.7 Hybrid Cloud
3.3.1.7.1 Federated Identity

In addition to the basic identity needed by an end user, an enterprise user is likely to have an identity with the enterprise. Ideally, the enterprise user manages a single ID, with an infrastructure federating the other identities that might be required by cloud services.

3.3.1.7.2 Open Client

Access to the cloud service should not require a particular platform or technology.

3.3.1.7.3 Location Awareness

Depending on the kind of data the enterprise is managing on the user's behalf, there might be legal restrictions on the location of the physical server where the data is stored. Although this violates the cloud computing ideal that the user should not have to know details of the physical infrastructure, this requirement is essential. Many applications cannot be moved to the cloud until cloud vendors provide an API for determining the location of the physical hardware that delivers the cloud service.

3.3.1.7.4 Identity

The cloud service must authenticate the end user.

3.3.1.7.5 Metering and Monitoring

All cloud services must be metered and monitored for cost control, chargebacks and provisioning.

3.3.1.7.6 Management and Governance

Public cloud providers make it very easy to open an account and begin using cloud services; that ease of use creates the risk that individuals in an enterprise will use cloud services on their own initiative. Management of VMs and of cloud services such as storage, databases and message queues is needed to track what services are used. Governance is crucial to ensure that policies and government regulations are followed wherever cloud computing is used. Other governance requirements will be industry- and geography-specific.

3.3.1.7.7 Security

Any use case involving an enterprise will have more sophisticated security requirements than one involving a single end user. Similarly, the more advanced enterprise use cases that follow will have correspondingly more advanced security requirements.

3.3.1.7.8 Common File Format for VMs

A VM created for one cloud vendor’s platform should be portable to another vendor’s platform. Any solution to this requirement must account for differences in the ways cloud vendors attach storage to virtual machines.

3.3.1.7.9 Common APIs for Cloud Storage and Middleware

The enterprise use cases require common APIs for access to cloud storage services, cloud databases, and other cloud middleware services such as message queues. Writing custom code that works only for a particular vendor’s cloud service locks the enterprise into that vendor’s system and eliminates some of the financial benefits and flexibility that cloud computing provides.

3.3.1.7.10 Data and Application Federation

Enterprise applications need to combine data from multiple cloud-based sources, and they need to coordinate the activities of applications running in different clouds.

3.3.1.7.11 SLAs and Benchmarks

In addition to the basic SLAs required by end users, enterprises who sign contracts based on SLAs will need a standard way of benchmarking performance. There must be an unambiguous way of defining what a cloud provider will deliver, and there must be an unambiguous way of measuring what was actually delivered.

3.3.1.7.12 Lifecycle Management

Enterprises must be able to manage the lifecycle of applications and documents. This requirement includes versioning of applications and the retention and destruction of data. Discovery is a major issue for many organizations. There are substantial legal liabilities if certain data is no longer available. In addition to data retention, in some cases an enterprise will want to make sure data is destroyed at some point.

3.3.1.7.13 Deployment

It should be simple to build a VM image and deploy it to the cloud as necessary. When that VM image is built, it should be possible to move that image from one cloud provider to another, compensating for the different mechanisms vendors have for attaching storage to VMs. Deployment of applications to the cloud should be straightforward as well.

3.3.1.7.14 Industry-specific standards and protocols

Many cloud computing solutions between enterprises will use existing standards such as RosettaNet or OAGIS. The applicable standards will vary from one application to the next and from one industry to the next.

3.3.1.7.15 Interoperability

Because more than one enterprise is involved, interoperability between the enterprises is essential.

3.3.2 Security Requirements
3.3.2.1 Regulations

Regulations are not technical issues, but they must be addressed. Laws and regulations will determine security requirements that take priority over functional requirements.

3.3.2.2 Security Controls

Although a given consumer might not need all of these security controls, consumers should be wary of any cloud provider that makes security-related claims and reassurances without an infrastructure capable of delivering all of them.

3.3.2.2.1 Asset Management

It must be possible to manage all of the hardware, network and software assets (physical or virtual) that make up the cloud infrastructure. This includes being able to account for any physical- or network-based access of an asset for audit and compliance purposes.

3.3.2.2.2 Cryptography: Key and Certificate Management

Any secure system needs an infrastructure for employing and managing cryptographic keys and certificates. This includes employing standards-based cryptographic functions and services to support information security at rest and in motion.

3.3.2.2.3 Data/Storage Security

It must be possible to store data in an encrypted format. In addition, some consumers will need their data to be stored separately from other consumers' data.

3.3.2.2.4 Endpoint Security

Consumers must be able to secure the endpoints to their cloud resources. This includes the ability to restrict endpoints by network protocol and device type.

3.3.2.2.5 Event Auditing and Reporting

Consumers must be able to access data about events that happen in the cloud, especially system failures and security breaches. Access to events includes the ability to learn about past events and reporting of new events as they occur. Cloud providers cause significant damage to their reputations when they fail to report events in a timely manner.

3.3.2.2.6 Identity, Roles, Access Control and Attributes

It must be possible to define the identity, roles, entitlements and any other attributes of individuals and services in a consistent, machine-readable way in order to effectively implement access control and enforce security policy against cloud-based resources.

3.3.2.2.7 Network Security

It must be possible to secure network traffic at the switch, router and packet level. The IP stack itself should be secure as well.

3.3.2.2.8 Security Policies

It must be possible to define, resolve, and enforce security policies in support of access control, resource allocation and any other decisions in a consistent, machine-readable way. The method for defining policies should be robust enough that SLAs and licenses can be enforced automatically.

3.3.2.2.9 Service Automation

There must be an automated way to manage and analyze security control flows and processes in support of security compliance audits. This also includes reporting any events that violate any security policies or customer licensing agreements.

3.3.2.2.10 Workload and Service Management

It must be possible to configure, deploy and monitor services in accordance with defined security policies and customer licensing agreements.

3.3.2.3 Security Federation Patterns

To implement these security controls, several federation patterns are needed. Cloud providers should deliver these patterns through existing security standards.

3.3.2.3.1 Trust

The ability for two parties to define a trust relationship with an authentication authority that is capable of exchanging credentials (typically X.509 certificates) and then using those credentials to secure messages and create signed security tokens (typically SAML). Federated trust is the foundation on which all the other secure federation patterns are built.

3.3.2.3.2 Identity Management

The ability to define an identity provider that accepts a user’s credentials (a user ID and password, a certificate, etc.) and returns a signed security token that identifies that user. Service providers that trust the identity provider can use that token to grant appropriate access to the user, even though the service provider has no knowledge of the user.
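The pattern can be sketched with an HMAC-signed token; production systems would use SAML assertions and PKI rather than the shared secret assumed here, which serves only to keep the example self-contained:

```python
# Sketch of the identity-provider pattern with an HMAC-signed token.
# ASSUMPTION: a secret shared between the identity provider and the
# service providers that trust it; real deployments use SAML + X.509.
import base64
import hashlib
import hmac
import json

SECRET = b"shared-with-trusting-service-providers"

def issue_token(user_id, role):
    # The identity provider accepts credentials (elided) and returns
    # a signed token carrying the user's identity and role.
    claims = json.dumps({"sub": user_id, "role": role}).encode()
    sig = hmac.new(SECRET, claims, hashlib.sha256).digest()
    return base64.b64encode(claims) + b"." + base64.b64encode(sig)

def verify_token(token):
    # A service provider that trusts the issuer checks the signature
    # and can then grant access without knowing the user directly.
    claims_b64, sig_b64 = token.split(b".")
    claims = base64.b64decode(claims_b64)
    expected = hmac.new(SECRET, claims, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.b64decode(sig_b64)):
        raise ValueError("signature check failed")
    return json.loads(claims)
```

The essential property is that the service provider never sees the user's password; it trusts the issuer's signature instead.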

3.3.2.3.3 Access Management

The ability to write policies (typically in XACML) that examine security tokens to manage access to cloud resources. Access to resources can be controlled by more than one factor. For example, access to a resource could be limited to users in a particular role, but only across certain protocols and only at certain times of the day.
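A sketch of a multi-factor access decision in this style; the policy structure below is invented for illustration (a real deployment would express it in XACML):

```python
# Sketch: an access decision combining role, protocol and time of day,
# in the spirit of an XACML policy. The policy fields are assumptions.

POLICY = {"resource": "market-data",
          "allow_roles": {"analyst"},
          "allow_protocols": {"https"},
          "allow_hours": range(8, 18)}   # 08:00-17:59 only

def access_allowed(token_claims, protocol, hour, policy=POLICY):
    # Every factor must pass: role from the security token, transport
    # protocol of the request, and the hour the request arrives.
    return (token_claims.get("role") in policy["allow_roles"]
            and protocol in policy["allow_protocols"]
            and hour in policy["allow_hours"])
```

This mirrors the example in the text: an analyst over HTTPS during business hours is admitted; the same analyst at night is refused.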

3.3.2.3.4 Single Sign-on / Sign-Off

The ability to federate logins based on credentials from a trusted authority. Given an authenticated user with a particular role, federated single sign-on allows a user to login to one application and access other applications that trust the same authority. Federated single sign-off is part of this pattern as well; in many situations it will be vital that a user logging out of one application is logged out of all the others. The Single Sign-On pattern is enabled by the Identity Management pattern.

3.3.2.3.5 Audit and Compliance

The ability to collect audit and compliance data spread across multiple domains, including hybrid clouds. Federated audits are necessary to ensure and document compliance with SLAs and regulatory requirements.

3.3.2.3.6 Configuration Management

The ability to federate configuration data for services, applications and virtual machines. This data can include access policies and licensing information across multiple domains.

3.3.3 Developer Requirements
3.3.3.1 Caching

A programming toolkit that supports caching for cloud resources would boost the performance of cloud applications. Developers need an API to flush the cache, and may also need an API to explicitly place an object or resource into the cache.
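The two operations named above (an explicit put and a flush, alongside the usual get) can be sketched as a minimal client interface. The `CloudCache` class and its method names are hypothetical, not any particular provider's API.

```python
class CloudCache:
    """Hypothetical client for the caching API described above."""

    def __init__(self):
        self._store = {}

    def put(self, key, value):
        # Explicitly place an object or resource into the cache.
        self._store[key] = value

    def get(self, key, default=None):
        return self._store.get(key, default)

    def flush(self):
        # Invalidate every cached entry.
        self._store.clear()
```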

3.3.3.2 Centralized Logging

Logging is a common requirement for all developers, regardless of their role or the type of application they are writing. APIs should support creating logs, writing entries, examining entries, and opening and closing log files.
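The operations listed above (create, write, examine, close) might look like the following sketch. The `CentralLog` class is a hypothetical illustration of the API surface, not a real logging service client.

```python
class CentralLog:
    """Hypothetical handle onto a centralized cloud log."""

    def __init__(self, name):
        # Creating the object corresponds to creating/opening the log.
        self.name = name
        self.entries = []
        self.closed = False

    def write(self, level, message):
        if self.closed:
            raise ValueError("log is closed")
        self.entries.append((level, message))

    def read(self, level=None):
        # Examine entries, optionally filtered by severity level.
        return [e for e in self.entries if level is None or e[0] == level]

    def close(self):
        self.closed = True
```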

3.3.3.3 Database

Developers need a way to access cloud databases. Cloud databases vary widely in their design (some are schema-driven and relational, many are neither), so developers writing cloud applications will choose a cloud database provider based on the needs of each application. The database API must support the basic CRUD operations.
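Whatever the underlying model, the API must cover the four CRUD operations. A minimal sketch of that surface, assuming a schema-less store with generated record IDs (the `CloudDatabase` class is hypothetical):

```python
class CloudDatabase:
    """Hypothetical schema-less cloud database client exposing CRUD."""

    def __init__(self):
        self._rows = {}
        self._next_id = 1

    def create(self, record):
        rid = self._next_id
        self._next_id += 1
        self._rows[rid] = dict(record)
        return rid

    def read(self, rid):
        return self._rows.get(rid)

    def update(self, rid, changes):
        self._rows[rid].update(changes)

    def delete(self, rid):
        del self._rows[rid]
```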

3.3.3.4 Identity Management

Developers need a way to manage identities. In the simplest case, this requires a way to authenticate a given user's credentials. In cases where an application works with multiple data sources and applications, a developer needs a way to federate that user's identities. A federated identity management system can produce credentials to allow a given user to access a particular service even if the service and user have no knowledge of each other. APIs for identity management should cache and delete credentials as appropriate.
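The core flow (an identity provider verifies credentials and issues a signed token that relying services can verify without knowing the user) can be sketched with an HMAC signature. The shared secret, user table, and both function names here are hypothetical; a production federation would use SAML or similar standards rather than this toy scheme.

```python
import hashlib
import hmac

# Assumption for this sketch: the identity provider and relying
# services share a signing key out of band.
SECRET = b"demo-shared-secret"

def issue_token(user_id, password, users):
    """Hypothetical identity provider: verify credentials, return a signed token."""
    if users.get(user_id) != password:
        return None
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{sig}"

def verify_token(token):
    """A relying service trusts the provider's signature, not the user directly."""
    user_id, sig = token.split(":")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(sig, expected) else None
```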

3.3.3.5 Messaging-Point-to-Point

Developers need an API to post messages to a queue and to consume those messages. An API must also allow the developer to peek at a message (examine the message's contents without consuming it).
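The three operations (post, consume, peek) fit on a simple queue interface. The `MessageQueue` class is a hypothetical sketch of the API shape, not a real messaging client.

```python
from collections import deque

class MessageQueue:
    """Hypothetical point-to-point queue client."""

    def __init__(self):
        self._q = deque()

    def post(self, message):
        self._q.append(message)

    def peek(self):
        # Examine the head message without consuming it.
        return self._q[0] if self._q else None

    def consume(self):
        # Remove and return the head message.
        return self._q.popleft() if self._q else None
```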

3.3.3.6 Messaging-Pub-Sub

Developers need an API to work with topics in a message queuing system. The API should allow developers to post messages to a topic and retrieve messages from a topic.
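A minimal sketch of the topic operations, assuming a broker that holds pending messages per topic (the `TopicBroker` class and its method names are hypothetical):

```python
from collections import defaultdict

class TopicBroker:
    """Hypothetical publish/subscribe broker keyed by topic name."""

    def __init__(self):
        self._topics = defaultdict(list)

    def publish(self, topic, message):
        self._topics[topic].append(message)

    def retrieve(self, topic):
        # Return and clear all pending messages on the topic.
        messages, self._topics[topic] = self._topics[topic], []
        return messages
```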

3.3.3.7 Raw Compute / Job Processing

Developers need an API to work with large processing jobs such as Hadoop-style data mining. The API should allow developers to start, stop, monitor and pause processing jobs.
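The start/stop/monitor/pause lifecycle can be sketched as a small controller. The `JobController` class is hypothetical; real job APIs (e.g., around Hadoop) would also carry job configuration and progress data.

```python
class JobController:
    """Hypothetical controller for long-running processing jobs."""

    def __init__(self):
        self._jobs = {}
        self._next_id = 1

    def start(self, description):
        jid = self._next_id
        self._next_id += 1
        self._jobs[jid] = "RUNNING"
        return jid

    def pause(self, jid):
        self._jobs[jid] = "PAUSED"

    def stop(self, jid):
        self._jobs[jid] = "STOPPED"

    def status(self, jid):
        # Monitor a job's current state.
        return self._jobs[jid]
```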

3.3.3.8 Session Management

The ability to manage user sessions is crucial, particularly in a cloud environment. The infrastructure of a cloud is redundant and resilient in the face of machine failures, so sessions must be maintained even when a particular cloud node goes down. The session API must make it easy to access or manipulate the current state of the user session.
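The key property above — a session survives the loss of any single node — is usually achieved by replicating session state. A toy sketch, assuming write-through replication across a few in-memory "nodes" (the `SessionStore` class and the replication scheme are illustrative only):

```python
class SessionStore:
    """Hypothetical session store replicated across cloud nodes, so a
    session survives the failure of any single node."""

    def __init__(self, replicas=3):
        # Each 'node' is modeled as a plain dict.
        self.nodes = [dict() for _ in range(replicas)]

    def set(self, session_id, key, value):
        # Write-through to every replica.
        for node in self.nodes:
            node.setdefault(session_id, {})[key] = value

    def get(self, session_id, key):
        # Read from the first node that still holds the session.
        for node in self.nodes:
            if session_id in node:
                return node[session_id].get(key)
        return None

    def fail_node(self, index):
        # Simulate a node going down and losing its state.
        self.nodes[index].clear()
```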

3.3.3.9 Service Discovery

Developers need a way to discover which cloud services are available. Cloud services should be searchable by type of service, with the API providing additional function appropriate to each service type.
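Searching by service type suggests a registry interface along these lines. The `ServiceRegistry` class and its record fields are a hypothetical sketch.

```python
class ServiceRegistry:
    """Hypothetical registry of cloud services, searchable by type."""

    def __init__(self):
        self._services = []

    def register(self, name, service_type, endpoint):
        self._services.append(
            {"name": name, "type": service_type, "endpoint": endpoint}
        )

    def find(self, service_type):
        # Discover all services of a given type.
        return [s for s in self._services if s["type"] == service_type]
```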

3.3.3.10 SLAs

Developers using service discovery need an automated way to determine the policies of the services their code discovers. With such an API, developers can write applications that interrogate cloud services and select the one that best meets the application’s SLA criteria.
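The selection step described above — interrogate candidate services and pick the one that best meets the application's SLA criteria — can be sketched as a filter-then-rank function. The policy fields (`latency_ms`, `uptime`) and the tie-break rule are assumptions for illustration.

```python
def best_match(services, max_latency_ms, min_uptime):
    """Hypothetical SLA matching: keep services whose advertised policy
    meets the caller's criteria, then prefer the lowest latency."""
    eligible = [
        s for s in services
        if s["latency_ms"] <= max_latency_ms and s["uptime"] >= min_uptime
    ]
    return min(eligible, key=lambda s: s["latency_ms"]) if eligible else None
```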

3.3.3.11 Storage

Developers need a way to access cloud storage services. The API must provide the ability to store and retrieve both data and metadata.
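A minimal sketch of a storage API that keeps data and metadata together, in the spirit of blob stores like S3 mentioned earlier. The `ObjectStore` class and its method names are hypothetical.

```python
class ObjectStore:
    """Hypothetical blob store holding both data and per-object metadata."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data, metadata=None):
        # Store the object's data alongside its metadata.
        self._objects[key] = {"data": data, "meta": dict(metadata or {})}

    def get(self, key):
        return self._objects[key]["data"]

    def get_metadata(self, key):
        return self._objects[key]["meta"]
```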

3.3.4 Tactical/Deployable Cloud
3.3.4.1 Limited/Intermittent Connectivity
3.3.4.2 Network Connection Authentication
3.3.4.3 Redundant Compute/Storage Processes
3.3.4.4 Autonomic Capabilities
3.4 Mission Support Analysis (SCOPE)
3.4.1 Evaluation Dimensions
3.4.1.1 Capability/Domain Dependent Scope
3.4.1.1.1 Interoperability Dimension (Cloud Ecosystem)
3.4.1.1.1.1 Business/Mission Tier
3.4.1.1.1.1.1 Mission Service Resources
3.4.1.1.1.1.1.1 Portability
3.4.1.1.1.1.1.2 Interoperability
3.4.1.1.1.1.1.3 Responsiveness
3.4.1.1.1.1.1.4 SLA Compatibility
3.4.1.1.1.1.2 Data Resources
3.4.1.1.1.1.2.1 Portability
3.4.1.1.1.1.2.2 Interoperability
3.4.1.1.1.1.2.3 SLA Compatibility
3.4.1.1.1.1.2.4 Responsiveness
3.4.1.1.1.2 Platform Tier
3.4.1.1.1.2.1 Portability
3.4.1.1.1.2.2 Interoperability
3.4.1.1.1.2.3 SLA Compatibility
3.4.1.1.1.2.4 Responsiveness
3.4.1.1.1.3 Operating System Tier
3.4.1.1.1.3.1 Portability
3.4.1.1.1.3.2 Interoperability
3.4.1.1.1.3.3 SLA Compatibility
3.4.1.1.1.3.4 Responsiveness
3.4.1.1.1.4 Virtualization Tier
3.4.1.1.1.4.1 Network Services
3.4.1.1.1.4.1.1 Portability
3.4.1.1.1.4.1.2 Interoperability
3.4.1.1.1.4.1.3 SLA Compatibility
3.4.1.1.1.4.1.4 Responsiveness
3.4.1.1.1.4.2 Storage Services
3.4.1.1.1.4.2.1 Portability
3.4.1.1.1.4.2.2 Interoperability
3.4.1.1.1.4.2.3 SLA Compatibility
3.4.1.1.1.4.2.4 Responsiveness
3.4.1.1.1.4.3 Compute Platform Resources
3.4.1.1.1.4.3.1 Portability
3.4.1.1.1.4.3.2 Interoperability
3.4.1.1.1.4.3.3 SLA Compatibility
3.4.1.1.1.4.3.4 Responsiveness
3.4.1.1.1.5 Physical Tier
3.4.1.1.1.5.1 Portability
3.4.1.1.1.5.2 Interoperability
3.4.1.1.1.5.3 SLA Compatibility
3.4.1.1.1.5.4 Responsiveness
3.4.1.1.1.6 Standardization
3.4.1.1.1.6.1 Client
3.4.1.1.1.6.2 Software (SaaS)
3.4.1.1.1.6.2.1 Operating Environment
3.4.1.1.1.6.2.1.1 HTML 5
3.4.1.1.1.6.2.2 Event-driven scripting language
3.4.1.1.1.6.2.2.1 ECMAScript
3.4.1.1.1.6.2.3 Data-interchange format
3.4.1.1.1.6.2.3.1 JSON (RFC 4627)
3.4.1.1.1.6.3 Platform (PaaS)
3.4.1.1.1.6.3.1 Management API
3.4.1.1.1.6.4 Infrastructure (IaaS)
3.4.1.1.1.6.4.1 Management API
3.4.1.1.1.6.4.1.1 Cloud Infrastructure API (CIA)
3.4.1.1.1.6.4.2 System Virtualization, Partitioning and Clustering
3.4.1.1.1.6.4.2.1 System Virtualization, Partitioning and Clustering (Draft)
3.4.1.1.1.6.4.3 Container format for virtual machines
3.4.1.1.1.6.4.3.1 Open Virtualization Format (OVF)
3.4.1.1.1.6.4.4 Descriptive language for resources
3.4.1.1.1.6.4.4.1 CIM
3.4.1.1.1.6.5 Fabric
3.4.1.1.2 Operational Dimensions
3.4.1.1.2.1 Governance & Mgmt Dimensions
3.4.1.1.2.1.1 Operational Responsibility
3.4.1.1.2.1.1.1 Relationship Management
3.4.1.1.2.1.1.1.1 Measurement Currency
3.4.1.1.2.1.1.1.1.1 Money
3.4.1.1.2.1.1.1.1.2 Number of contacts
3.4.1.1.2.1.1.1.1.3 Number of interactions
3.4.1.1.2.1.1.1.1.4 Public visibility
3.4.1.1.2.1.1.2 Consequence Management
3.4.1.1.2.1.1.3 Mission Specificity Requirement
3.4.1.1.2.1.1.4 SLA Management
3.4.1.1.2.1.1.5 QOS Management
3.4.1.1.2.1.1.5.1 Privacy/security/anonymity levels
3.4.1.1.2.1.1.5.2 Redundancy and/or physical dispersion levels
3.4.1.1.2.1.1.6 Geographic coupling
3.4.1.1.2.1.1.7 Politico-Socio Coupling
3.4.1.1.2.1.1.7.1 National Affiliation
3.4.1.1.2.1.1.7.2 Language
3.4.1.1.2.1.1.7.3 Currency
3.4.1.1.2.1.1.7.4 Legal Constraints
3.4.1.1.2.1.1.7.5 Service Labeling
3.4.1.1.2.1.1.8 Cloud Operations & Support Model
3.4.1.1.2.1.1.9 Cloud Application Lifecycle Governance
3.4.1.1.2.1.1.10 Legacy Application Migration
3.4.1.1.2.1.2 Organizational Policy
3.4.1.1.2.1.2.1 Two/Three Party Relationships
3.4.1.1.2.1.2.2 Funding Model & Incentives
3.4.1.1.2.1.2.2.1 Fee for Service
3.4.1.1.2.1.2.2.1.1 Commercial Services
3.4.1.1.2.1.2.2.1.2 Public Service
3.4.1.1.2.1.2.2.1.3 Private Service
3.4.1.1.2.1.2.2.2 Required Service Model
3.4.1.1.2.1.2.2.3 Community Contributor Model
3.4.1.1.2.1.2.2.4 Legacy Consolidation Model
3.4.1.1.2.1.2.2.5 Insurance Model
3.4.1.1.2.1.2.2.6 Charity for Goodwill
3.4.1.1.2.1.2.3 Lifecycle Governance
3.4.1.1.2.1.2.4 Policy Enforcement Framework
3.4.1.1.2.1.2.5 Cloud Management & Monitoring
3.4.1.1.2.1.2.6 Organizational Culture & Behavior
3.4.1.1.2.2 Cloud Deployment Dimensions
3.4.1.1.2.2.1 Resource Management
3.4.1.1.2.2.1.1 Resource Allocation
3.4.1.1.2.2.1.2 Granularity
3.4.1.1.2.2.1.3 Resource Type Decomposition
3.4.1.1.2.2.1.4 Decision Drivers
3.4.1.1.2.2.1.5 Decision Responsiveness
3.4.1.1.2.2.2 Resource Ownership
3.4.1.1.3 Cloud Enablement Dimensions
3.4.1.1.3.1 Business/Mission Tier
3.4.1.1.3.1.1 Scalability
3.4.1.1.3.1.2 Ownership
3.4.1.1.3.1.3 Capacity
3.4.1.1.3.1.4 Dynamic Range
3.4.1.1.3.1.5 Operational Visibility
3.4.1.1.3.1.6 Platform Tier Coupling
3.4.1.1.3.1.7 "OS" Tier Coupling
3.4.1.1.3.1.8 Virtualization Tier Coupling
3.4.1.1.3.1.9 Physical Tier Coupling
3.4.1.1.3.1.9.1 Sensor networking
3.4.1.1.3.1.9.2 Process Control
3.4.1.1.3.1.10 Domain Specificity
3.4.1.1.3.1.11 Mission Service Resources
3.4.1.1.3.1.12 Data Resources
3.4.1.1.3.1.13 Business/Mission Services
3.4.1.1.3.1.13.1 Email
3.4.1.1.3.1.13.2 Business/Mission Applications
3.4.1.1.3.1.13.3 Enterprise Applications
3.4.1.1.3.1.13.4 Desktop Software
3.4.1.1.3.1.13.5 Business Utilities
3.4.1.1.3.1.13.6 DaaS/KaaS
3.4.1.1.3.1.13.7 Business Processes as a Service
3.4.1.1.3.2 Platform Tier
3.4.1.1.3.2.1 Scalability
3.4.1.1.3.2.2 Ownership
3.4.1.1.3.2.3 Capacity
3.4.1.1.3.2.4 Dynamic Range
3.4.1.1.3.2.5 Operational Visibility
3.4.1.1.3.2.6 Functionality
3.4.1.1.3.2.6.1 General Purpose Services
3.4.1.1.3.2.6.1.1 Search Services
3.4.1.1.3.2.6.1.2 Semantic Interoperability Services
3.4.1.1.3.2.6.1.3 SOA Enablement Services
3.4.1.1.3.2.6.1.4 Application container services
3.4.1.1.3.2.6.1.5 Application hosting and runtime services
3.4.1.1.3.2.6.1.6 Web application and content hosting & delivery services
3.4.1.1.3.2.6.1.7 Messaging, mediation, integration services
3.4.1.1.3.2.6.1.8 Developer resources
3.4.1.1.3.2.6.2 Functional Domain Services
3.4.1.1.3.2.6.2.1 Retail Storefront Services
3.4.1.1.3.2.6.2.2 Business function services
3.4.1.1.3.2.6.2.3 Records management services
3.4.1.1.3.2.6.2.4 Dynamic/Short Lived Services (Tactical)
3.4.1.1.3.2.6.2.5 Other Enterprise Services
3.4.1.1.3.2.6.3 Single purpose services
3.4.1.1.3.3 "OS" Tier
3.4.1.1.3.3.1 Scalability
3.4.1.1.3.3.2 Ownership
3.4.1.1.3.3.3 Capacity
3.4.1.1.3.3.4 Dynamic Range
3.4.1.1.3.3.5 Operational Visibility
3.4.1.1.3.3.6 Functionality
3.4.1.1.3.3.6.1 Virtualization Technology
3.4.1.1.3.3.6.2 SOA Enablement Technology
3.4.1.1.3.3.6.3 Chargeback and Financial Integration
3.4.1.1.3.3.6.4 Load Balancing & Performance Assurance
3.4.1.1.3.3.6.5 Monitoring, management and SLA enforcement
3.4.1.1.3.3.6.6 Resource provisioning and management
3.4.1.1.3.3.6.7 Billing & Metering
3.4.1.1.3.3.6.8 Onboarding and offboarding automation
3.4.1.1.3.3.6.9 Security and privacy tools/controls
3.4.1.1.3.3.6.10 Cloud Pattern enablement tools
3.4.1.1.3.3.6.11 Cloud workflow, process management and orchestration tools
3.4.1.1.3.4 Virtualization Tier
3.4.1.1.3.4.1 Scalability
3.4.1.1.3.4.2 Ownership
3.4.1.1.3.4.3 Capacity
3.4.1.1.3.4.4 Dynamic Range
3.4.1.1.3.4.5 Operational Visibility
3.4.1.1.3.4.6 Functionality
3.4.1.1.3.4.6.1 Network Services
3.4.1.1.3.4.6.1.1 Bandwidth
3.4.1.1.3.4.6.1.2 Latency
3.4.1.1.3.4.6.1.3 Asymmetries
3.4.1.1.3.4.6.1.4 Mobility
3.4.1.1.3.4.6.1.5 Network Entity Reach
3.4.1.1.3.4.6.1.6 Capacity
3.4.1.1.3.4.6.1.6.1 Provider/Consumer Network Capacity
3.4.1.1.3.4.6.1.6.2 Internal Network Capacity
3.4.1.1.3.4.6.1.6.3 Nodal Capacity
3.4.1.1.3.4.6.2 Storage Services
3.4.1.1.3.4.6.2.1 Persistence
3.4.1.1.3.4.6.2.2 Access Speed Tiering
3.4.1.1.3.4.6.3 Compute Platform Resources
3.4.1.1.3.4.6.3.1 Intel Instruction Set
3.4.1.1.3.4.6.3.2 PowerPC
3.4.1.1.3.4.6.3.3 Small platform set
3.4.1.1.3.4.6.3.3.1 Smartphone
3.4.1.1.3.4.6.3.3.2 PDA
3.4.1.1.3.4.6.3.4 High Performance Platforms
3.4.1.1.3.4.6.3.4.1 Very large word size
3.4.1.1.3.4.6.3.4.2 Massively parallel processors
3.4.1.1.3.4.6.3.5 User Delivery Requirements
3.4.1.1.3.4.6.3.6 OS Types Supported
3.4.1.1.3.4.6.3.7 Fault Tolerance
3.4.1.1.3.4.6.3.8 Application Types
3.4.1.1.3.4.6.4 Security Resources
3.4.1.1.3.4.6.5 Other Virtualized Resources
3.4.1.1.3.5 Physical Tier
3.4.1.1.3.5.1 Scalability
3.4.1.1.3.5.2 Ownership
3.4.1.1.3.5.3 Capacity
3.4.1.1.3.5.4 Dynamic Range
3.4.1.1.3.5.5 Operational Visibility
3.4.1.1.4 Integrated Resource Management / Enterprise Resource Planning
3.4.1.1.4.1 Data/Service Related to Cloud Computing
3.4.1.1.4.2 Relevant Reference/Scope Issue
3.4.1.1.5 Customer Relationship Management
3.4.1.1.5.1 Data/Service Related to Cloud Computing
3.4.1.1.5.2 Relevant Reference/Scope Issue
3.4.1.1.6 Billing
3.4.1.1.6.1 Data/Service Related to Cloud Computing
3.4.1.1.6.2 Relevant Reference/Scope Issue
3.4.1.1.7 Demand Forecasting
3.4.1.1.7.1 Data/Service Related to Cloud Computing
3.4.1.1.7.2 Relevant Reference/Scope Issue
3.4.1.1.8 Network Management
3.4.1.1.8.1 Data/Service Related to Cloud Computing
3.4.1.1.8.2 Relevant Reference/Scope Issue
3.4.1.1.9 Application Management
3.4.1.1.9.1 Data/Service Related to Cloud Computing
3.4.1.1.9.2 Relevant Reference/Scope Issue
3.4.1.1.10 Data Rights Management
3.4.1.1.10.1 Data/Service Related to Cloud Computing
3.4.1.1.10.2 Relevant Reference/Scope Issue
3.4.1.1.11 Regulatory Compliance Management
3.4.1.1.11.1 Data/Service Related to Cloud Computing
3.4.1.1.11.2 Relevant Reference/Scope Issue
3.4.1.1.12 Cybersecurity
3.4.1.1.12.1 Data/Service Related to Cloud Computing
3.4.1.1.12.2 Relevant Reference/Scope Issue
3.4.1.2 Capability/Domain Independent Scope
3.4.1.3 Net-Readiness
3.4.1.4 Technical/Economic Feasibility
3.4.1.5 General
3.5 Economic Analysis
3.5.1 Cost of Status Quo
3.5.2 Cost of IaaS
3.5.3 Development / PaaS Cost
A1 Kevin L. Jackson
