Thursday, September 18, 2014

IBM V7000 Overview



The IBM V7000 system is a virtualizing RAID storage system.

IBM V7000 software

The IBM V7000 software provides these functions for the host systems that attach to the Storwize V7000:
·         Creates a single pool of storage
·         Provides logical unit virtualization
·         Manages logical volumes
·         Mirrors logical volumes
The Storwize V7000 system also provides these functions:
·         Large scalable cache
·         Copy Services
·         IBM FlashCopy® (point-in-time copy) function, including thin-provisioned FlashCopy to make multiple targets affordable
·         Metro Mirror (synchronous copy)
·         Global Mirror (asynchronous copy)
·         Data migration
·         Space management
·         IBM System Storage® Easy Tier® to migrate the most frequently used data to higher-performance storage
·         Metering of service quality when combined with IBM Tivoli® Storage Productivity Center
·         Thin-provisioned logical volumes
·         Compressed volumes to consolidate storage
 
IBM V7000


The IBM V7000 product combines hardware and software to control the mapping of storage into volumes in a SAN environment. The Storwize V7000 system provides many benefits to storage administrators, including simplified storage administration, integrated management of IBM servers and storage, and enterprise-class performance, function, and reliability.
The IBM V7000 product includes rack-mounted units called enclosures. Each enclosure, available in 12-drive and 24-drive models, includes two canisters and two power supplies. There are two types of enclosures: control and expansion. A system can support more than one control enclosure, and a single control enclosure can have several expansion enclosures attached to it.
 The IBM V7000 system also includes an easy-to-use product management GUI, which helps you to configure, troubleshoot, and manage the system.
 This combination of hardware and software provides storage virtualization capabilities, where you can manage physical resources as shared virtual resources. In this way, all the internal and external physical storage appears to the hosts as virtual storage, which can be used to centrally manage and allocate capacity as needed.
 Here is how it works. Enclosures include physical drives that are logically grouped into Redundant Arrays of Independent Disks, or RAID.
 Instead of mapping to hosts directly, the arrays present groups of managed disks to the system to be included in a pool of virtual storage. The storage pool can include disks from either internal or external storage arrays.
 You can create storage pools based on performance and other characteristics.
 Node canisters are always installed in pairs as part of a control enclosure. Each control enclosure represents an I/O group. Any expansion enclosures that are attached to a specific control enclosure also belong to the same I/O group.
 Each I/O group translates the disks in a storage pool into one or more volumes that are presented to a host system.
You can create different types of volumes, including mirrored and thin-provisioned volumes.
With mirrored volumes, the system keeps two copies of the data, but the host is aware of only a single volume. Mirroring can keep a volume online even when some of the associated storage systems cannot be accessed.
Thin-provisioned volumes present more virtual capacity to the host than the real capacity that backs them. When additional real storage is required, you can expand the real storage manually or automatically.
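To make the idea concrete, here is a toy Python model of a thin-provisioned volume; the class, its fields, and the fixed auto-expand step are illustrative assumptions, not IBM's implementation:

```python
# Illustrative model (not IBM code) of a thin-provisioned volume:
# virtual capacity exceeds real capacity, and real capacity is
# expanded automatically as usage grows, up to the virtual limit.

class ThinVolume:
    def __init__(self, virtual_gb, real_gb, autoexpand_step_gb=10):
        self.virtual_gb = virtual_gb   # capacity presented to the host
        self.real_gb = real_gb         # capacity actually allocated
        self.used_gb = 0
        self.step = autoexpand_step_gb

    def write(self, gb):
        if self.used_gb + gb > self.virtual_gb:
            raise ValueError("write exceeds virtual capacity")
        # Auto-expand real capacity in steps until the write fits.
        while self.used_gb + gb > self.real_gb:
            self.real_gb = min(self.real_gb + self.step, self.virtual_gb)
        self.used_gb += gb

vol = ThinVolume(virtual_gb=100, real_gb=20)
vol.write(35)
print(vol.real_gb)  # real capacity grew from 20 GB to 40 GB
```

The host only ever sees the 100 GB virtual capacity; the real allocation grows behind the scenes.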
 After the volumes are created, you can specify which hosts can access the volumes.
 In addition to providing virtualization capabilities, the system also provides advanced SAN functions, including data migration, Easy Tier storage, and Copy Services. You typically migrate data to move workloads from external storage systems that are about to be replaced to a Storwize V7000 system. Data migration is performed without interruption to the host I/O.
Volumes are created by mapping disk extents to volume extents, and data migration essentially changes this mapping. Migration can be performed at the volume, disk, or extent level, depending on the purpose of the migration.
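The extent-mapping idea can be sketched in a few lines of Python; the representation of a volume as a list of (managed disk, extent) pairs is a simplification for illustration, not the actual on-disk structure:

```python
# Illustrative sketch (not IBM code): a volume is a mapping from volume
# extents to (managed_disk, extent) locations. Migrating the volume to
# another managed disk just rewrites this mapping, while the volume's
# own extent numbering, as seen by the host, never changes.

def create_volume(mdisk, n_extents):
    # Volume extent i is backed by extent i of the given managed disk.
    return [(mdisk, i) for i in range(n_extents)]

def migrate_volume(volume, target_mdisk):
    for i in range(len(volume)):
        # A real system copies the extent's data here, transparently to
        # host I/O, before updating the mapping entry.
        volume[i] = (target_mdisk, i)

vol = create_volume("mdisk0", 4)
migrate_volume(vol, "mdisk7")
print(vol)  # every volume extent is now backed by mdisk7
```

Because the host addresses volume extents, not managed-disk extents, the remapping is invisible to host I/O.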
 The Easy Tier feature provides performance and cost benefits by analyzing your workload performance trends to identify the most frequently accessed data. That data is then automatically stored on high-performance solid-state drives, while the remainder of the data is stored on more affordable hard disk drives.
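The core idea behind Easy Tier can be illustrated with a hypothetical sketch; this is only the general hot/cold ranking concept, not IBM's actual placement algorithm or its measurement windows:

```python
# Hypothetical sketch of the idea behind Easy Tier (not the actual
# algorithm): rank extents by access frequency and place the hottest
# ones on the SSD tier, leaving the rest on the HDD tier.

def tier_extents(access_counts, ssd_capacity):
    """access_counts: {extent_id: I/O count}. Returns (ssd, hdd) sets."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    ssd = set(ranked[:ssd_capacity])
    hdd = set(ranked[ssd_capacity:])
    return ssd, hdd

counts = {"e0": 5, "e1": 900, "e2": 40, "e3": 700}
ssd, hdd = tier_extents(counts, ssd_capacity=2)
print(ssd)  # the two most frequently accessed extents: e1 and e3
```

In the real feature the analysis runs continuously and the data movement happens online, extent by extent.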
The IBM V7000 system also provides several types of Copy Services that help you migrate, back up, and recover data by creating point-in-time, synchronous, or asynchronous copies of volumes: the FlashCopy®, Metro Mirror, and Global Mirror Copy Services.
 The FlashCopy feature copies data instantaneously from a source volume to a target volume. This copy is taken at a particular point in time as hosts continue to access the data. You must create a mapping between the source volume and the target volume. A mapping can be created between any two volumes of the same size in a clustered system. FlashCopy consistency groups perform point-in-time copy functions across multiple volumes. You can set up FlashCopy mappings and consistency groups using the management GUI.
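A point-in-time copy can be modeled with a small copy-on-write sketch; this is a common way such copies are implemented, offered here as an illustration rather than a description of FlashCopy internals:

```python
# Conceptual copy-on-write sketch of a point-in-time copy (not IBM's
# implementation): the target initially references the source, and a
# block is preserved only just before the source overwrites it, so the
# target always shows the data as it was when the mapping started.

class PointInTimeCopy:
    def __init__(self, source):
        self.source = source          # list of blocks, still writable
        self.preserved = {}           # blocks saved at first overwrite

    def write_source(self, index, data):
        if index not in self.preserved:
            self.preserved[index] = self.source[index]  # save old block
        self.source[index] = data

    def read_target(self, index):
        # Fall through to the source for blocks not yet overwritten.
        return self.preserved.get(index, self.source[index])

src = ["a", "b", "c"]
fc = PointInTimeCopy(src)
fc.write_source(1, "B")
print(fc.read_target(1), src[1])  # "b" (point-in-time) vs "B" (current)
```

This is why the copy appears instantaneous: almost no data moves at creation time, and hosts keep writing to the source throughout.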
 Metro Mirror is a Copy Service that provides a continuous, synchronous mirror of one volume to a second volume. The secondary volumes can be located in the same clustered system or in different clustered systems. The different systems can be up to 300 kilometers apart, so by using Metro Mirror you can make a copy to a location offsite or across town. Because the mirror is updated in real time, no data is lost when a failure occurs, so Metro Mirror is generally used for disaster-recovery purposes, where it is important to avoid data loss.
Global Mirror is a Copy Service that is very similar to Metro Mirror: both provide a continuous mirror of one volume to a second volume. With Global Mirror, however, the copy is asynchronous. The host does not wait for the write to the secondary volume to complete, so for long distances performance is improved compared to Metro Mirror. However, if a failure occurs, you might lose the most recent writes. Global Mirror works well for data protection and migration when recovery sites are more than 300 kilometers away.
Before creating a Metro Mirror or Global Mirror copy, you first need to establish a partnership between two clustered systems using the management GUI.
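The synchronous/asynchronous distinction above can be captured in a simplified model; the function names and the in-memory dictionaries are illustrative assumptions, not IBM code:

```python
# Simplified model (not IBM code) of Metro Mirror vs Global Mirror:
# a synchronous write is acknowledged only after both copies are
# updated; an asynchronous write is acknowledged after the primary
# alone, with replication to the secondary deferred.

primary, secondary, pending = {}, {}, []

def metro_write(key, value):
    primary[key] = value
    secondary[key] = value         # secondary updated before the ack
    return "ack"                   # host waited for both writes

def global_write(key, value):
    primary[key] = value
    pending.append((key, value))   # replicated later, in the background
    return "ack"                   # host waited for the primary only

def drain_pending():
    while pending:
        key, value = pending.pop(0)
        secondary[key] = value

metro_write("a", 1)
global_write("b", 2)
print("b" in secondary)  # False until async replication drains
drain_pending()
print("b" in secondary)  # True
```

The `pending` queue is exactly where data can be lost in the asynchronous case: writes acknowledged to the host but not yet applied at the secondary.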
Configuration details
A storage area network (SAN) configuration that contains Storwize V7000 nodes must follow configuration rules for the following components:
·         Storage systems
·         Nodes
·         Fibre Channel host bus adapters (HBAs)
 Note: 
     If the system has an FC adapter fitted, some host systems can be directly attached without using a SAN switch. Check the support pages on the product website for the current details of supported host OS / driver / HBA types.
·         Converged network adapters (CNAs)
·         Fibre Channel switches
·         iSCSI Ethernet ports
·         Fabrics
·         Zoning

Storwize V7000 hardware

The Storwize V7000 storage system consists of a set of drive enclosures. Control enclosures contain disk drives and two node canisters. A collection of control enclosures that are managed as a single system is a clustered system. Expansion enclosures contain drives and are attached to control enclosures. Expansion canisters include the serial-attached SCSI (SAS) interface hardware that enables the node canisters to use the drives of the expansion enclosures.


A Storwize V7000 system can also be used as a traditional RAID storage system: the internal drives are configured into arrays, and volumes are created from those arrays.
The two node canisters in each control enclosure are arranged into pairs known as I/O groups. A single pair is responsible for serving I/O on a given volume. Because a volume is served by two node canisters, there is no loss of availability if one node canister fails or is taken offline.
The Storwize V7000 system supports both hard disk drives and solid-state drives (SSDs). In addition, a Storwize V7000 system without any internal drives can be used as a storage virtualization solution.

System management

The Storwize V7000 nodes in a clustered system operate as a single system and present a single point of control for system management and service. System management and error reporting are provided through an Ethernet interface to one of the nodes in the system, which is called the configuration node. The configuration node runs a web server and provides a command-line interface (CLI). The configuration node is a role that any node can take. If the current configuration node fails, a new configuration node is selected from the remaining nodes. Each node also provides a command-line interface and web interface for performing hardware service actions.
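The configuration-node role described above can be sketched as a simple role reassignment; the selection rule used here (first surviving node) is an assumption for illustration, not IBM's actual election logic:

```python
# Illustrative sketch (not IBM's election logic): the configuration
# node is a role that any node can hold; if the current holder fails,
# the role moves to one of the remaining nodes in the clustered system.

def pick_config_node(nodes, failed=()):
    candidates = [n for n in nodes if n not in failed]
    if not candidates:
        raise RuntimeError("no nodes available for the config role")
    return candidates[0]   # real systems apply their own selection rules

nodes = ["node1", "node2", "node3", "node4"]
print(pick_config_node(nodes))                    # node1 holds the role
print(pick_config_node(nodes, failed={"node1"}))  # role moves to node2
```

The key point is that management access survives any single node failure because the role, not the node, is what matters.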

Fabric types

I/O operations between hosts and Storwize V7000 nodes and between Storwize V7000 nodes and RAID storage systems are performed by using the SCSI standard. The Storwize V7000 nodes communicate with each other by using private SCSI commands.
FCoE connectivity is supported on Storwize V7000 models 2076-312 and 2076-324, after the system software has been upgraded to version 6.4.


Canisters
Canisters are hardware units that are subcomponents of enclosures.
The system has two types of canisters: node canisters and expansion canisters. A node canister provides host interfaces, management interfaces, and SAS interfaces to the control enclosure. A node canister has the cache memory, the internal drives to store software and logs, and the processing power to run the system's virtualizing and management software. An expansion canister provides the serial-attached SCSI (SAS) connectivity to the drives in an expansion enclosure. Each enclosure contains a pair of canisters to provide redundancy. The canister in the upper slot is identified as Canister 1. The inverted canister in the lower slot is identified as Canister 2.

Symmetric virtualization

When used as an external storage system, Storwize® V7000 provides symmetric virtualization.
Virtualization splits the storage that is presented by the storage systems into smaller chunks that are known as extents. These extents are then concatenated, using various policies, to make volumes. With symmetric virtualization, host systems can be isolated from the physical storage. Advanced functions, such as data migration, can run without the need to reconfigure the host. With symmetric virtualization, the virtualization engine is the central configuration point for the SAN.

Host mapping

Host mapping is the process of controlling which hosts have access to specific volumes within the system.
Host mapping is similar in concept to logical unit number (LUN) mapping or masking. LUN mapping is the process of controlling which hosts have access to specific logical units (LUs) within the disk controllers. LUN mapping is typically done at the storage system level. Host mapping is done at the Storwize® V7000 level.
The act of mapping a volume to a host makes the volume accessible to the WWPNs or iSCSI names, such as iSCSI qualified names (IQNs) or extended unique identifiers (EUIs), that are configured in the host object.

Volumes and host mappings

Each host mapping associates a volume with a host object and provides a way for all WWPNs and iSCSI names in the host object to access the volume. You can map a volume to multiple host objects. When a mapping is created, multiple paths might exist across the SAN fabric or Ethernet network from the hosts to the nodes that are presenting the volume. Without a multipathing device driver, most operating systems present each path to a volume as a separate storage device. The multipathing software manages the many paths that are available to the volume and presents a single storage device to the operating system. If there are multiple paths, the system requires that the multipathing software run on the host.
Note: The iSCSI names and associated IP addresses for the nodes can fail over between nodes in the I/O group, which negates the need for multipathing drivers in some configurations. Multipathing drivers are still recommended, however, to provide the highest availability.
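What multipathing software does can be illustrated with a conceptual sketch; the path names and failover policy here are hypothetical, and a real driver works at the block-device layer, not in application code:

```python
# Conceptual sketch (not a real driver) of multipathing: several SAN
# paths lead to the same volume, the host sees one device, and I/O
# falls over to a surviving path when a path fails.

class MultipathDevice:
    def __init__(self, paths):
        self.paths = list(paths)   # e.g. one path per fabric/node port

    def read(self, block, failed_paths=()):
        for path in self.paths:
            if path not in failed_paths:
                return f"read block {block} via {path}"
        raise IOError("all paths to the volume have failed")

dev = MultipathDevice(["fabricA-node1", "fabricB-node2"])
print(dev.read(0))                                  # uses the first path
print(dev.read(0, failed_paths={"fabricA-node1"}))  # fails over
```

Without such a layer, each path would surface as a separate disk to the operating system, as the paragraph above notes.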
When you map a volume to a host, you can optionally specify a SCSI ID for the volume. This ID controls the sequence in which the volumes are presented to the host. Check the host software requirements for SCSI IDs because some hosts require a contiguous set. For example, if you present three volumes to the host, and those volumes have SCSI IDs of 0, 1, and 3, the volume that has an ID of 3 might not be found because no disk is mapped with an ID of 2. The clustered system automatically assigns the lowest available SCSI ID if none is specified.
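The "lowest available SCSI ID" rule from the paragraph above is easy to express as a sketch; the code mirrors the stated behavior rather than any actual IBM implementation:

```python
# Sketch of assigning the lowest available SCSI ID when a volume is
# mapped without an explicit ID (behavior as described in the text,
# not IBM code).

def next_scsi_id(ids_in_use):
    scsi_id = 0
    while scsi_id in ids_in_use:
        scsi_id += 1
    return scsi_id

mappings = {0: "vol_a", 1: "vol_b", 3: "vol_c"}  # note the gap at ID 2
print(next_scsi_id(mappings))  # 2: the lowest ID not already in use
```

Filling the gap at ID 2 first is what keeps the ID set contiguous for hosts that require it.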
 LUN masking is usually implemented in the device driver software on each host. The host has visibility of more LUNs than it is intended to use, and device driver software masks the LUNs that are not to be used by this host. After the masking is complete, only some disks are visible to the operating system. The Storwize V7000 can support this type of configuration by mapping all volumes to every host object and by using operating system-specific LUN masking technology. The default, and recommended, Storwize V7000 behavior, however, is to map to the host only those volumes that the host requires access to.
  




