Purpose:
This document is intended as a quick reference guide to best-practice recommendations for FAST VP. Please review the associated product guide and best practices white paper for a more detailed discussion of the various FAST VP configurations and options. The online help for Unisphere for VMAX and SMC are also good references for procedures to set up and configure FAST VP.
General Recommendations
- Bind all tdevs to the Fibre Channel tier
- Do not preallocate tdevs. Preallocated but unwritten extents will be demoted to the lowest tier
- Set “Max. Subscription Percent” for the EFD and SATA pools to 0 to prevent accidental binding to those pools (see the sketch at the end of this section)
- Set “Pool Reserved Capacity” for the EFD and SATA pools to 1%
- FAST VP can be enabled on the frame prior to data migrations
- Associate storage groups for migrated hosts after the migration cutover is complete.
- Defining and managing fewer FAST VP policies is generally better than managing more
- Leverage Tier Advisor or presales sizing as a starting point for defining an initial FAST VP policy
- The following FAST VP policies can also be used as an initial starting point:
- Gold Policy => EFD 15%, FC 90%, SATA 60%
- Silver Policy => EFD 10%, FC 80%, SATA 80%
- Bronze Policy => EFD 5%, FC 40%, SATA 100%
- Start by placing all storage groups in the Silver policy (see the association sketch at the end of this section)
- Check FAST VP compliance reports to determine tier usage and performance requirements for the associated SG
- Promote SGs that require more EFD to the Gold policy
- Re-check the FAST VP compliance reports after any policy change to confirm tier usage and performance requirements for the associated SGs
- Look to set storage group I/O limits on QA/Dev hosts to limit SATA pool drive utilization
- To free up space for subsequent migrations, first look for less critical hosts that can be moved to a lower policy
- For data migrations with multiple cutover waves to the frame:
- Prior to initiating data copy, ensure there is enough spare capacity in the FC pool to receive the full capacity of the migration wave.
- Review VP pool response times and spindle utilization prior to starting, to confirm the existing FAST VP policy tier percentages make sense for your environment. Adjust tier percentages as required to maintain a healthy balance between FC pool consumption and SATA pool spindle utilization.
- Setting a higher “Pool Reserved Capacity” (i.e. 20-30%) may also be used to free up capacity in the pool. Use with care, as this will potentially cause “warm” extents to be pushed to the SATA pool
- EFD pool % in policies can be increased after all migrations are completed to optimize performance
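A minimal sketch of implementing the pool and association recommendations above, assuming 5876 / Solutions Enabler 7.5-era syntax. The pool names (EFD_Pool, SATA_Pool), the storage group name (app_sg), and the exact symconfigure “set pool” arguments are assumptions; verify them against the Solutions Enabler CLI product guide before use.
symconfigure -sid xxxx -cmd "set pool EFD_Pool, type=thin, max_subs_percent=0, prc=1;" commit – Set max subscription and PRC on a thin pool
symconfigure -sid xxxx -cmd "set pool SATA_Pool, type=thin, max_subs_percent=0, prc=1;" commit
symfast -sid xxxx associate -sg app_sg -fp_name Silver – Place a storage group in the Silver policy (default priority 2)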
Performance time window
- Use the default: performance metrics are collected 24 hours a day, every day
Data movement time window
- Allow FAST VP to perform movements 24 hours a day, every day.
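To confirm that no restrictive windows have been defined (by default, with no windows defined, metric collection and data movement run 24 hours a day), list the currently defined time windows:
symtw -sid xxxx list – Show any defined FAST data movement and performance monitor time windows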
The Workload Analysis Period (WAP)
- A longer WAP will factor less-recent host activity into FAST VP promotion/demotion scores.
- A shorter WAP allows FAST VP to react to changes more quickly, but may lead to greater amounts of data being moved between tiers
- Use default WAP (168 hrs)
The Initial Analysis Period (IAP)
- At the initial deployment of FAST VP, it may make sense to set the IAP to 168 hours (one week)
- During steady state, the IAP can be reduced to 24 hours (one day)
The FAST VP Relocation Rate (FRR)
- For the initial deployment of FAST VP, start with a more conservative value for the relocation rate, perhaps 7 or 8
- Once the observed amount of data movement between tiers has settled, the FRR can be set to a more aggressive level, such as 5 or lower
- To estimate the expected movement rate for a given FRR: rate (GB/hour) = 10 GB / FRR
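Worked example using the formula above: an FRR of 5 yields 10/5 = 2 GB/hour of expected movement, while the most aggressive setting of 1 yields 10 GB/hour.
A sketch of tuning the WAP, IAP, and FRR from the CLI. The -workload_period, -min_perf_period, and -relocation_rate parameter names are assumptions based on Solutions Enabler 7.x “set -control_parms” syntax; verify them against the product guide:
symfast -sid xxxx set -control_parms -vp -workload_period 168 -min_perf_period 24 -relocation_rate 5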
VP Allocation by FAST Policy (requires 5876, off by default)
- As a best practice, it is recommended that VP Allocation by FAST Policy be enabled
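A hedged example of enabling the setting; the -vp_allocation_by_fp parameter name is an assumption from 5876-era Solutions Enabler documentation and should be verified:
symfast -sid xxxx set -control_parms -vp_allocation_by_fp ENABLE – Enable VP Allocation by FAST Policy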
Storage group priority
- This priority value can be between 1 and 3, with 1 being the highest priority. The default is 2
- The best-practice recommendation is to use the default priority of 2 for all storage groups associated with FAST VP policies
SRDF recommendations
- Enable “VP Allocation by FAST Policy” on the R2 side
- FAST VP SRDF coordination (enabled per storage group @ R1; an independent FAST VP policy is defined @ R2)
- Enable coordination only if R2 performance at the time of failover is critical to the client.
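A sketch of enabling SRDF coordination when associating the R1 storage group; the -rdf_coordination flag and the names used here (r1_app_sg, Gold) are assumptions to verify against the SRDF/FAST VP documentation:
symfast -sid xxxx associate -sg r1_app_sg -fp_name Gold -rdf_coordination ENABLE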
Gathering Data (FAST VP Configuration)
Execute a “symcfg discover” prior to the cmds below
symfast -sid xxxx list -control_parms – List FAST VP configuration settings
symfast -sid xxxx list -state -vp – List the current FAST VP software state
symtw -sid xxxx list – List FAST data movement and performance monitor time windows
symfast list -sid xxxx -fp -v – List FAST VP policies and the storage groups associated with each policy
symfast -sid xxxx list -association – List FAST VP Storage Group association and priority
symfast -sid xxxx list -association -demand – List FAST VP Storage Group Compliance reports, from a SG perspective
symfast -sid xxxx list -tech ALL -demand -vp -v – List FAST VP Compliance reports, from a drive Tech/tier perspective
symtier -sid xxxx list -vp – List VP tiers configuration and utilization summary
symtier -sid xxxx list -vp -v – List VP tiers configuration and utilization details with % full info
Gathering Data (Virtual Provisioning Data)
Execute a “symcfg discover” prior to the cmds below
symcfg -sid xxxx list -pool -thin -detail -gb – List VP thin Pool allocation summary in GB (usable/free/ % full)
symcfg -sid xxxx list -pool -thin -detail -gb -v – List VP thin Pool device allocation detail in GB
symcfg -sid xxxx show -pool <name> -thin -all -detail -gb – List a specific VP thin Pool device allocation detail in GB
symcfg -sid xxxx list -tdev -bound -detail -gb – List all tdevs (in GB) with VP pool binding associations
symcfg -sid xxxx list -tdev -pool <name> -gb – List all tdevs (in GB) bound to the specified VP pool
symdev -sid xxxx list -pinned – List any symdevs that are pinned to a specific tier
symdev -sid xxxx list -datadev -tech <EFD | FC | SATA> – List all thin data devices associated with a disk technology
symdev -sid xxxx list -datadev -nonpooled – List all thin data devices not associated with a VP pool
symdisk -sid xxxx list -dskgrp_summary – Summary list of all disks defined in the frame by disk groups
symsg -sid xxxx list – List storage groups and whether each SG is associated with a FAST VP policy
Gathering Data (Symapi DB for Offline Review)
To look at the data offline, the array configuration information must first be synced to the symapi_db.bin file
Execute a “symcfg discover” prior to the cmds below
symcfg -sid xxxx sync -fast – Sync the symapi_db file with FAST info gathered from the frame
symcfg -sid xxxx sync -vpdata – Sync the symapi_db file with VP info gathered from the frame
symcfg -sid xxxx sync -tier – Sync the symapi_db file with tier info gathered from the frame
On UNIX, the default pathname for the configuration database file is:
/var/symapi/db/symapi_db.bin
On Windows, the default configuration database path is:
C:\Program Files\EMC\SYMAPI\db\symapi_db.bin
Set the following environment variables on the local symcli host to point to the offline symapi_db file:
SYMCLI_OFFLINE : 1
SYMCLI_DB_FILE : symapi_db.bin
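For example, on a UNIX review host, after copying the synced symapi_db.bin over (the /tmp path below is only a placeholder):
export SYMCLI_OFFLINE=1
export SYMCLI_DB_FILE=/tmp/symapi_db.bin
symfast list -sid xxxx -fp -v – Now runs against the offline DB file instead of the live frame
On Windows, use "set SYMCLI_OFFLINE=1" and "set SYMCLI_DB_FILE=C:\path\to\symapi_db.bin" instead.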
Limits and Restrictions
- FAST VP Policies may contain up to 4 tiers (Up to 3 internal and 1 external <FTS> with 5876 code)
- Internal tiers should be of different drive technology types (each internal tier can consist of up to 4 VP pools)
- VMAX supports up to 256 policies; policy names can be up to 32 alphanumeric characters (‘-’ and ‘_’ are allowed)
- An array can contain up to a maximum of 8192 storage groups.
- Each storage group can contain a maximum of 4096 devices.
- Up to 1000 storage groups can be associated with FAST VP policies
- Cascaded storage groups require 5876 code and SE 7.5 or greater
- Up to 32 child storage groups can be added to 1 parent storage group
- FAST VP only supports association with storage groups that contain devices