TechTrend Cloud Assessment Process

A well-developed cloud‐computing assessment approach will help to accelerate and advance the adoption of cloud computing while simultaneously enabling tangible benefits to be captured and attained.

White Paper

Synopsis

A well-developed cloud-computing assessment approach will help to accelerate and advance the adoption of cloud computing while simultaneously enabling tangible benefits to be captured and attained. This paper discusses a proven, structured engineering approach and the key points an agency should consider in order to successfully conduct a cloud assessment for a given enterprise.

Feasibility Study

Before an agency considers moving components to the cloud, it must first determine whether the components are amenable to a virtualized cloud deployment and, second, to which of the myriad cloud deployment options they should be migrated. Migration to the cloud has numerous potential benefits, including lower costs, the ability to adapt to changing demands, and the ability to quickly provision additional servers for effective testing. An agency that consolidates data center IT capabilities can realize significant long-term cost reductions due to savings in space, power, cooling, maintenance support, and reduced investment in new capabilities. Such efforts are inhibited when extensive and detailed information about the current environment is lacking. Servers with heterogeneous configurations, distribution, ownership, and management greatly magnify the challenge and increase risk.

Successful migration to a new infrastructure hosting environment requires depth and breadth of expertise to minimize disruptions to mission critical business operations and enable significant cost savings.

Understanding the Requirements. Any major enterprise transformation effort depends on complete, accurate, and pertinent information to execute effectively, efficiently, and in a timely fashion. Detailed knowledge is needed about the inventory, configurations, utilization/workloads, interdependencies, and mission functionality of the technical components. Such detailed knowledge is necessary to map and transition as-is workloads to an optimized, cost-effective IT environment, especially if a migration to consolidated IT capabilities is extended to include virtualization, IT optimization, or cloud capabilities.

Cloud computing will help an agency address these challenges:

| Challenge | Benefit |
| --- | --- |
| Computing capacity to meet peak demand while avoiding payment for underutilized infrastructure | Leverage cloud computing's elasticity and pay-per-use model to meet peak demand while avoiding capital expenditure that results in underutilized physical assets |
| Reduce operational costs and end-of-life/aging IT assets | Migrate applications to the virtualized cloud environment, utilize self-service provisioning, and procure bundled managed services up to the operating-system level |
| Support the agency cloud vision and adopt an outsourced model of buying services | Consolidate and decommission underutilized data centers and physical servers and migrate applications to the cloud environment |

Cloud Assessment Process

TechTrend first performs a Cloud Suitability Assessment (CSA) to determine whether a legacy system is a candidate for migration to the cloud. If the candidate system is cloud-suitable, we then determine which cloud deployment model (e.g., private, public, hybrid, government community) and which cloud delivery model (IaaS, PaaS, SaaS) form the best combination to meet the candidate's technical and business requirements.

Our CSA process considers these criteria:

| CSA Evaluation Criteria | Definition |
| --- | --- |
| Life-cycle phase of current computing infrastructure | The tech-refresh cycle, or the timeframe in which new hardware is procured, and its impact on return on investment; also the life-cycle of the new hardware before it reaches end-of-life or is no longer supported by the vendor |
| Telecom availability, bandwidth, and latency to the cloud | Telecom requirements to ensure adequate application performance and sufficient bandwidth to handle bursting during peak usage |
| Business impact assessment | The system criticality, RTO, RPO, fault tolerance, and disaster recovery/COOP, which result in reliability, maintainability, and availability requirements |
| Cost to migrate and adapt to a virtualized environment | The as-is to to-be costs of the cloud feasibility study, the actual migration, and the life-cycle costs of the cloud computing environment |
| Security and privacy | FISMA level, data access, data spillage, data integrity at rest/in motion/in process, PII, and other security controls that must be handled by the cloud infrastructure |
| Customization level | The amount of customization and uniqueness that drives the cloud deployment model |
| Scalability/elasticity, API, on-line provisioning | Self-provisioning and automated scalability to support varying demand by the system |
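
The criteria above lend themselves to a simple weighted scoring pass. The sketch below is an illustration only: the weights, the 1–5 rating scale, and the suitability threshold are assumptions, not TechTrend's actual methodology.

```python
# Hypothetical weighted scoring for a Cloud Suitability Assessment (CSA).
# Weights and the 3.0 threshold are illustrative assumptions.

CSA_WEIGHTS = {
    "lifecycle_phase": 0.15,   # tech-refresh timing and ROI impact
    "telecom": 0.15,           # bandwidth and latency to the cloud
    "business_impact": 0.20,   # criticality, RTO/RPO, DR/COOP
    "migration_cost": 0.20,    # as-is to to-be and life-cycle costs
    "security_privacy": 0.15,  # FISMA level, PII, data integrity
    "customization": 0.10,     # uniqueness driving the deployment model
    "elasticity": 0.05,        # self-provisioning, auto-scaling needs
}

def csa_score(ratings: dict) -> float:
    """Weighted average of 1-5 ratings; higher means more cloud-suitable."""
    if set(ratings) != set(CSA_WEIGHTS):
        raise ValueError("ratings must cover every CSA criterion")
    return sum(CSA_WEIGHTS[c] * ratings[c] for c in CSA_WEIGHTS)

def is_cloud_suitable(ratings: dict, threshold: float = 3.0) -> bool:
    return csa_score(ratings) >= threshold

ratings = {
    "lifecycle_phase": 4, "telecom": 5, "business_impact": 3,
    "migration_cost": 4, "security_privacy": 3, "customization": 4,
    "elasticity": 5,
}
print(round(csa_score(ratings), 2), is_cloud_suitable(ratings))
```

In practice the weights would be tuned to the agency's priorities, with security- or cost-sensitive agencies shifting weight toward those criteria.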

Once we have determined that the target system is suitable for the cloud, we then choose the cloud implementation alternative based on four criteria: use scenario, security, cost, and risk.

| Delivery Model | Description | Deployment Model | Cost | Risk | Security |
| --- | --- | --- | --- | --- | --- |
| IaaS | Provision computing resources to deploy arbitrary software, OS, and applications/data | Private | Highest | Lowest | Most secure |
| PaaS | Create custom applications using the provider's programming tools | Government Community | Moderate | Moderate | Moderately secure |
| SaaS | Use the provider's application on the cloud | Public | Lowest | Highest | Least secure |
| | | Hybrid | Dependent on the composition of two or more distinct cloud infrastructures | | |
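
The cost/risk/security trade-off in the table can be encoded as ordinal rankings and weighed against a workload's priorities. This is an illustrative sketch, not a formal selection model; the hybrid model is omitted because its profile depends on the composition of its constituent clouds.

```python
# Illustrative encoding of the deployment-model trade-offs above.
# Rankings are ordinal (1 = most favorable) and mirror the table.

DEPLOYMENT_MODELS = {
    "private":   {"cost": 3, "risk": 1, "security": 1},
    "community": {"cost": 2, "risk": 2, "security": 2},
    "public":    {"cost": 1, "risk": 3, "security": 3},
}

def pick_deployment(priorities: dict) -> str:
    """Choose the model minimizing the priority-weighted sum of ranks."""
    def total(model):
        ranks = DEPLOYMENT_MODELS[model]
        return sum(priorities[k] * ranks[k] for k in ranks)
    return min(DEPLOYMENT_MODELS, key=total)

# A security-driven workload favors private; a cost-driven one, public.
print(pick_deployment({"cost": 1, "risk": 1, "security": 5}))
print(pick_deployment({"cost": 5, "risk": 1, "security": 1}))
```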

CSA Project Deliverables. Our approach will yield:

  1. A feasibility study investigating cloud options and capabilities.
  2. Risk issues and suggested mitigations.
  3. A suggested migration plan, including:
    • Building an inventory of IT assets and applications and developing a plan to consolidate resources into a shared services environment as appropriate
    • Determination of which applications can be virtualized and how to maintain those that cannot be virtualized
    • Elimination of duplicative IT services and consolidation of assets, specifically servers and networks
    • Automated software configuration tools for server setup

Process to Identify Risks and Suggested Mitigations

Risks and Issues. The issues to be addressed include:

  1. Potential hosting sites – migration options
  2. Price trade-offs
  3. Security and Privacy
  4. Interfaces with external data centers and systems
  5. Operations
  6. Back-up plan and disaster recovery plan

Potential Hosting Sites – Migration Options. Cloud service providers should be evaluated based on technical and business requirements, security compliance, capacity, and future innovation.

Security. One key requirement is compliance with the Federal Information Security Management Act of 2002 (FISMA). Achieving and maintaining FISMA compliance can be complicated, which narrows the field of potential cloud service providers. For those service providers that advertise FISMA compliance, a close assessment should be made of the level of FISMA compliance (Low, Moderate, High), the number of clients supported, the number of years of experience, and the number of certified security professionals on staff. Another critical security compliance consideration is FedRAMP. Outside of a private cloud, commercial cloud service providers must implement all FedRAMP processes and security controls.

Capacity. Although virtualization reduces the footprint required to provide equivalent computing capacity versus a non-virtualized environment, organic growth and new program hosting requirements will slowly eat away at a data center's physical capacity. Since migration is an expensive process, the agency must consider the long-term capacity impact of the cloud service provider. Commercial providers have far more capacity than any single agency.

Future Innovation. To take full advantage of the cloud environment, the agency will ultimately want to move to a virtualized computing, networking, and storage environment. The virtualized storage environment is often a multi-tier architecture composed of fast flash drives, SAN block storage, Network-Attached Storage, and tape drives. With certain storage controllers, active data that is accessed more frequently and more recently is automatically and non-disruptively moved to the faster storage tier, while data accessed less frequently is moved to slower, more economical storage tiers. Storage solutions can be designed to handle hundreds of terabytes and to readily expand into the petabyte range with the same infrastructure.

Depending on how the application is designed, elasticity can be accomplished by provisioning more resources (CPUs, memory, storage) to existing server instances, or by provisioning more instances of a particular server type (e.g., Web servers, application servers, database servers). Images of the server configurations are stored in the cloud provisioning tool, so cloning a server is very rapid. When the demand surge subsides, the unused resources are de-provisioned and returned to the resource pool.
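
The provision-on-surge, de-provision-on-subside loop described above can be sketched as a simple threshold-based scaler. The utilization thresholds and instance bounds here are illustrative assumptions, not values from any particular provisioning tool.

```python
# Minimal sketch of a reactive scaling decision: add an instance when
# utilization is high, return one to the pool when demand subsides.
# Thresholds (75%/30%) and the 2-20 instance bounds are illustrative.

def scale(instances: int, cpu_pct: float,
          min_n: int = 2, max_n: int = 20,
          up_at: float = 75.0, down_at: float = 30.0) -> int:
    """Return the new instance count for one scaling decision."""
    if cpu_pct > up_at and instances < max_n:
        return instances + 1   # clone another server from the stored image
    if cpu_pct < down_at and instances > min_n:
        return instances - 1   # de-provision and return capacity to the pool
    return instances

n = 2
for load in [80, 90, 85, 40, 20, 10]:  # simulated CPU utilization samples
    n = scale(n, load)
print(n)
```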

Different demand patterns can be managed in various manners. Seasonal demands, such as recreational activities, can be met by provisioning additional elastic resources during the peak months and returning to the baseline level afterwards. By analyzing the demand pattern and trends, provisioning can be planned in advance. Because the agency only pays for the resources used, elasticity offers cost savings benefits.

Price Trade-off. Cloud hosting providers compete on capabilities, customer service, and the price of offerings. Not only must an agency perform a price trade-off among providers' cloud costs, it must also consider the cost of continuing to maintain its own hosting infrastructure. The costs and benefits to evaluate include:

  • Current IT Costs: Staff expense, Facilities expense, Systems, Telecom/Network and Other cost categories
  • Retained Costs: A subset of the Current IT Costs that will remain during and after your move to the Cloud
  • Additional Costs: Additional functionality your IT staff will support that, prior to the Cloud, wasn’t required
  • Cloud Vendor Costs: Costs from your Cloud Vendor to host your IT applications
  • Network Vendor Costs: Costs from Network Vendors to support your IT applications in the Cloud
  • Cloud Benefits: Benefits that companies realize from hosting applications in the Cloud (pay for what you use, rapid provisioning/de-provisioning, elasticity, no procurement, no hardware, software, operating system maintenance)
  • Migration Costs: Duplicate hardware/software in the target data center used to mirror source data; duplicate hardware/software in the source data center used to mirror source data; a cloud test/development/staging environment before moving to the production cloud computing environment
  • Additional Cloud Requirements: security; interfaces with external data centers and systems; backup & recovery, etc.
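
The cost categories above combine into a straightforward comparison: annual cloud cost is the sum of retained, additional, cloud vendor, and network vendor costs, and one-time migration costs are recovered through the annual savings. All figures in the sketch are hypothetical amounts for illustration only.

```python
# Hedged sketch of the price trade-off using the categories above.
# All dollar figures are hypothetical annual amounts.

def annual_cloud_cost(retained, additional, cloud_vendor, network_vendor):
    """Total annual cost after moving to the cloud."""
    return retained + additional + cloud_vendor + network_vendor

def migration_breakeven_years(current_it, cloud_annual, migration_cost):
    """Years of annual savings needed to recover one-time migration costs."""
    savings = current_it - cloud_annual
    if savings <= 0:
        return None  # cloud is not cheaper; no break-even point
    return migration_cost / savings

cloud = annual_cloud_cost(retained=200_000, additional=50_000,
                          cloud_vendor=400_000, network_vendor=60_000)
years = migration_breakeven_years(current_it=1_000_000,
                                  cloud_annual=cloud,
                                  migration_cost=580_000)
print(cloud, round(years, 1))
```

A full trade-off would also monetize the cloud benefits listed above (elasticity, rapid provisioning, avoided procurement), which this sketch omits.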

Security and Privacy. Security and privacy capabilities are crucial to consider when selecting a cloud service provider.

Interfaces with external data centers and systems. Interfaces with external data centers and systems should be designed to meet security (e.g., access levels, encryption), functional (e.g., protocols), and performance (e.g., response time, transaction volumes, and frequency of transactions) requirements. An agency must work to define and implement access control and firewall rules for secure connections to the external data centers and systems, including firewall changes. Adequate bandwidth shall be designed to meet the performance requirements. Prior to the switch-over to production, comprehensive testing shall be performed to ensure the implementation meets the requirements.

Operations. An agency must select a service provider whose managed hosting strategy leverages mature processes, advancing technology, and economies of scale. The equipment should be kept current, according to technology refresh cycles that are normal for the industry. The server, storage, and network environment must be designed to support the Service Level Agreements (SLAs). A redundant architecture should be implemented, and single points of failure should be eliminated. To further support the SLAs, the service provider should have implemented a toolset including market-leading products for systems management, network management, monitoring, configuration management, problem management, change management, and release management.

Backup Plan and Disaster Recovery Plan. An agency should select an IT service provider that provides disaster recovery or a Continuity of Operations Plan (COOP) through client deployment at multiple data centers. The service provider's facilities should be designed to meet at least a FISMA Moderate security baseline for the Physical and Environmental Information Assurance control family. Data backup should be performed at least weekly, or at the backup frequency specified by the client. Disaster recovery plans should specify the mission resumption time and include business recovery plans, system contingency plans, facility disaster recovery plans, and plan acceptance. Disaster recovery or COOP exercises should be scheduled annually or semi-annually, per client specifications.
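
One way to reason about backup frequency is against the client's recovery point objective (RPO): in the worst case, the data lost in a disaster equals the interval between backups. The sketch below illustrates that check; the intervals are examples, not provider commitments.

```python
# Simple check that a backup schedule can satisfy a recovery point
# objective (RPO). Worst-case data loss is one full backup interval.

from datetime import timedelta

def meets_rpo(backup_interval: timedelta, rpo: timedelta) -> bool:
    """True if backups are at least as frequent as the RPO requires."""
    return backup_interval <= rpo

weekly = timedelta(days=7)  # the paper's minimum backup frequency
print(meets_rpo(weekly, rpo=timedelta(days=7)))    # True
print(meets_rpo(weekly, rpo=timedelta(hours=24)))  # False: needs daily backups
```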

Process to Produce Migration Plan

Migration Plan. The migration plan contains Inventory Verification and Discovery, Transformation Planning and Analysis, Transformation Design and Implementation Planning, and Implementation and Test, as follows:

  • Inventory Verification and Discovery. During the Inventory Verification and Discovery phase, we will build an inventory of IT assets and applications. In addition to completing a client questionnaire and conducting client interviews, we install agents on the servers to automatically gather information. Utilization is measured over time for each system.
  • Transformation Planning and Analysis. The baseline information provided by the Discovery activities is input to the Analysis and Macro Design. TechTrend uses tools to automate the analysis, producing the architecture of the current and target environments. The target architecture and configuration are designed, and the components of the current environment are mapped to the target environment. This mapping indicates which resources are consolidated and which resources can be eliminated.
  • Transformation Design and Implementation Planning. The Implementation plan identifies the order of system and application migration based on business needs and priorities. Scheduling constraints are incorporated into the Implementation plan, including in-flight projects, software licensing and critical peak usage periods. It is important that a thorough user acceptance test plan be created to ensure the migration is complete and successful. The user acceptance test defines how much testing is sufficient before testing is completed and production can begin. The user acceptance tests should include all of the necessary functional and non-functional (e.g., operational, performance) tests.
  • Implementation and Testing. The Implementation activities will typically include a pilot deployment. Following the successful completion of user acceptance testing, the pilot is ready for production. Upon successful completion of the pilot, migration of other systems and applications will follow. As systems and applications are migrated to the cloud environment, the current environment is incrementally shut down.
  • Building an inventory of IT assets and applications and developing a plan to consolidate resources into a shared services environment as appropriate. During the Inventory Verification and Discovery phase, we collect and verify the characteristics of the set of applications, computing hardware, operating systems, storage environment, network topology, security/audit/regulatory constraints, client cost of ownership, and other client business drivers. The discovery agents gather information on the hardware, software, and network, or we leverage existing agents the agency already owns. We also run affinity monitoring to capture server and application dependencies, and utilization (e.g., CPU, memory, storage, network bandwidth) is measured over time for each system.

The baseline information provided by the Inventory Verification and Discovery activities is input to the Transformation Planning and Analysis. An application baseline profile includes the characteristics, utilization, and requirements for computing, memory, storage, networking, database access, application dependencies, programming language, compiler, operating system extensions, device drivers, security, and high availability. An analysis of the application and system profiles, workload characteristics, and dependencies leads to a plan to consolidate resources into a shared services environment as appropriate, eliminate duplicative IT services, and consolidate assets, specifically servers and networks.
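
A baseline profile of this kind can be represented as a simple record, and a naive consolidation pass then flags servers whose measured peak utilization stays low. The field names and thresholds below are illustrative assumptions, not TechTrend's actual data model.

```python
# Sketch of a server baseline profile and a naive consolidation pass:
# servers underutilized at peak are candidates for shared infrastructure.
# Field names and the 30%/40% thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class ServerProfile:
    name: str
    peak_cpu_pct: float   # measured over the discovery period
    peak_mem_pct: float
    dependencies: tuple   # servers this workload depends on

def consolidation_candidates(profiles, cpu_max=30.0, mem_max=40.0):
    """Servers underutilized on both CPU and memory at peak."""
    return [p.name for p in profiles
            if p.peak_cpu_pct <= cpu_max and p.peak_mem_pct <= mem_max]

fleet = [
    ServerProfile("web-01", 20.0, 35.0, ("app-01",)),
    ServerProfile("app-01", 85.0, 70.0, ("db-01",)),
    ServerProfile("db-01",  25.0, 60.0, ()),
]
print(consolidation_candidates(fleet))
```

A real analysis would also honor the captured dependencies so that consolidated servers keep their affinity and network-placement constraints.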

  • Determination of which applications can be virtualized and how to maintain those that cannot be virtualized. Applications need to be assessed for physical-to-virtual migration suitability based on several key factors, such as vendor support, resource usage, and legacy concerns. In general, most applications are virtualizable; the exceptions are applications whose support agreements do not allow running in a virtual environment, applications that use specific attached hardware such as fax/modems, and servers with extreme resource usage. Applications should be protected in the existing physical environment while they undergo acceptance testing inside the new virtual environment, to verify that they operate correctly, that dependent applications and data sources operate correctly, and that the application works correctly with virtualization technologies such as live migration. Applications that include databases also often need special considerations inside the virtualization environment, such as affinity rules and direct resource mappings, to maintain high I/O performance. The great majority of applications that run on a physical server can run on a virtual server.

The first step is to check with the software vendor/support vendor for each application, followed by checking with the virtualization provider for compatibility. If not already available, environments need to be configured, tested, and ready for production. Next, the application must be installed, configured, and staged in the application virtualization environment and tested for functionality. Common interfaces such as Web browsers and terminals will virtualize correctly, but each application must be configured for its new environment. For instance, client-server traffic that once traversed a firewall or network boundary now does not leave the data center; just the "session" is pushed to the client, so firewall, network, and security configurations must take that into account. The effort to virtualize applications may be well worthwhile to enable access via mobile devices.
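
The vendor-support and compatibility checks above can be sketched as an ordered gate, where any failing check stops the physical-to-virtual assessment with a reason. The check names and attribute keys are illustrative, not a formal TechTrend checklist.

```python
# The physical-to-virtual (P2V) suitability checks above, sketched as an
# ordered gate: the first failing check stops the assessment.
# Check names and attribute keys are illustrative assumptions.

P2V_CHECKS = [
    ("vendor supports virtual deployment", lambda a: a["vendor_supported"]),
    ("no dependency on attached hardware", lambda a: not a["uses_attached_hw"]),
    ("resource usage within host limits",  lambda a: a["peak_cpu_pct"] < 90),
]

def assess_p2v(app: dict):
    """Return (ok, reason) for physical-to-virtual suitability."""
    for name, check in P2V_CHECKS:
        if not check(app):
            return False, name
    return True, "virtualizable"

# A fax gateway fails on its attached-hardware dependency.
fax_gateway = {"vendor_supported": True, "uses_attached_hw": True,
               "peak_cpu_pct": 10}
print(assess_p2v(fax_gateway))
```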

  • Determine which systems are not good candidates for migration to a remote data center. Some systems or system components/services may not be good candidates for migration. When an application is not suitable for virtualization, special consideration must be given to supporting the application's remaining physical server environment, its dependencies, and its impact on the infrastructure environment, especially as the majority of systems in the enterprise become virtual. Such considerations include vendor support for the server hardware, its life cycle, disaster recovery/COOP and its interaction with virtualized systems, and its suitability to be phased out in favor of a virtualization-compatible application that meets the same business objectives. To maintain applications that cannot, or should not, be virtualized, a service provider's managed services can be used, with the benefits of economies of scale and shared use of the service management infrastructure. Based on our experience in past migrations, we understand which types of systems are poor candidates:
| Type of System | Rationale |
| --- | --- |
| Specific local service | Systems that provide a specific local service (e.g., fax servers, print servers, physical access control servers, AD/DC, DNS, local desktop management servers), due to reduced performance, increased latency for requests, and the need for continuous availability in the event of a disaster; can be virtualized locally |
| Not remotely manageable | Legacy systems that are not scalable or not remotely manageable, because the significant investment in modernization outweighs the cost savings of consolidating an end-of-life application; can be virtualized locally |
| High transaction or big data systems | High-transaction or big-data systems (e.g., MS SQL, Oracle) may not benefit from virtualization due to limited performance gains |
| Bandwidth intensive | Bandwidth-hungry traffic between end users and systems over limited and costly WAN bandwidth (may be mitigated by implementing WAFS or a WAN accelerator) |
| Non-supported platform | The target provider does not offer that hosting platform, requiring co-location, which can be more expensive than virtualization |
| Licensing constraint | Licensing concerns due to contracting terms and conditions that prevent the licenses or systems from being used at another location or managed by another entity |
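
The poor-candidate categories above can be expressed as simple flagging rules, each returning a reason a system may be unsuitable for remote migration. The attribute names and numeric cut-offs are illustrative assumptions.

```python
# The poor-candidate table expressed as flagging rules. Attribute names
# and the transaction/bandwidth cut-offs are illustrative assumptions.

POOR_CANDIDATE_RULES = [
    ("specific local service",
     lambda s: s.get("local_service", False)),
    ("not remotely manageable",
     lambda s: not s.get("remotely_manageable", True)),
    ("high transaction / big data",
     lambda s: s.get("transactions_per_sec", 0) > 10_000),
    ("bandwidth intensive",
     lambda s: s.get("wan_mbps_needed", 0) > 500),
    ("non-supported platform",
     lambda s: not s.get("platform_offered", True)),
    ("licensing constraint",
     lambda s: s.get("license_site_locked", False)),
]

def migration_concerns(system: dict) -> list:
    """Reasons a system may be a poor remote-migration candidate."""
    return [reason for reason, rule in POOR_CANDIDATE_RULES if rule(system)]

print_server = {"local_service": True, "remotely_manageable": True}
print(migration_concerns(print_server))
```

A flagged system is not automatically excluded; as the table notes, several categories can still be virtualized locally.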

Elimination of duplicative IT services and consolidation of assets, specifically servers and networks. Following the collection of the asset inventory and an analysis of the results, a complete picture of the current environment is obtained. Each application is profiled, including its functions, its dependencies on other applications and databases, the server it runs on, server utilization, network placement, network utilization, and other characteristics. With this information, along with the business and technical requirements, the target architecture and configuration are designed, and the components of the current environment are mapped to the target environment. This mapping indicates which services and resources are consolidated and which can be eliminated. A technical review is held with the agency's business and technical stakeholders to confirm that all of the requirements are met by the target environment. The stakeholders also approve the elimination of duplicative IT services and the consolidation of assets, specifically servers and networks, as represented in the target environment.