Exploring an IBM v7000 Storage Engine
Within the storage landscape today we have a multitude of products available to solve the never-ending storage challenge. Every storage product has its own set of features and characteristics which deliver certain elements of value to storage architects like myself. Recently an opportunity surfaced in which an IBM v7000 storage engine was available for me to review. So with the IBM v7000 in hand I started a process to evaluate what features and characteristics this product hosts that could create some interesting solutions to common storage challenges. Before breaking into the main content of this blog I would like to state what this blog isn't about. It is not a benchmark of everything the v7000 can do; good performance is very subjective, and with enough resources and tuning we can solve any load requirement with any product. It is also not a feature comparison exercise with other products in the same class. What I'm looking for is value in the product and how we can apply that value to solve storage problems.
If you're not familiar with the v7000 it does not take long to understand and navigate it; the web based management GUI is logically straightforward. It is beneficial to understand how the v7000 presents storage when working in the GUI. Historically the v7000 is a version of the original IBM SAN Volume Controller, more commonly known as the IBM SVC, with an updated set of management interfaces and some new useful features. The primary concept of the SVC is to provide a virtually abstracted instance of any block storage architecture the SVC can interface with. The design uses a model in which we define units of managed disk, or MDisks. MDisks are block level devices or LUs, such as a single disk or an array of disks, which can be located internally or externally from the SVC's clustered engines. MDisks can be placed into tiered storage resource pools or directly served as a block image. As the SVC virtually re-provisions its storage attachments to initiators it creates multi-pathed, nv-cache accelerated active/active connections for its storage consumers even if the original MDisk lacks this function. This definitely adds value as a tool to solve storage problems, especially where you may need to migrate or restructure an active SAN environment.
The setup phase of this evaluation proved to be very easy. The v7000 ships with a USB flash drive which is pre-loaded with a simple Windows based initialization tool. This simple tool allows anyone to reach the browser management subsystem in less than 5 minutes. Once the system is configured the v7000 only accepts a small set of functions from the tool that require physical access, such as resetting the superuser password. This limits any serious security events like connecting the USB flash drive to an incorrect host and disruptively changing its interface configuration.
Here are some screen shots of what brought me to the GUI in less than 5 minutes.
For me this easy to use tool was actually not the most significant element of value in the initial system configuration. Under the covers of the graphical tool we can see the v7000 provides the ability to configure a system from a text file when it's brought up from the default factory state. A subset of the system command line interface functions is available via the USB based configuration file satask.txt. The InitTool itself generates this subset, or you can create your own specific commands once you know the appropriate statements. This means we can create an automated configuration process with it. The reason I find this useful stems more from the context of disaster recovery and test lab provisioning. For example, if the alternate environment hosted a ZFS replicated set of LUNs on a commodity server, we could easily create a command line script driven configuration that serves a cloned set of those LUNs in image mode using the v7000 or its lower cost v3700 counterpart. This would allow us to cost effectively mirror the functionality of a source environment on demand while also allowing us to return back to a test lab or other previous state, all in the same day. In other words we gain the agility to restructure what a set of VMware ESXi or other hypervisor clusters can view repetitively on demand.
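To make the idea concrete, here is a minimal sketch of what a hand built satask.txt placed on the USB flash drive might contain to initialize a factory default system. The addressing values are placeholders of my own, and the exact satask syntax should be confirmed against the IBM CLI documentation for your firmware level before relying on it.
satask mkcluster -clusterip 192.168.70.121 -gw 192.168.70.1 -mask 255.255.255.0
Once the cluster is up, the rest of the provisioning (pools, image mode volumes, host mappings) can be scripted over ssh against the regular svctask command set, which is what makes the repeatable lab and DR scenarios above practical.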
In order to evaluate the performance behaviors of the v7000 I decided it needed to go through what I like to call a little storage server hell. It's a work exercise where we push the server to perform a 100% random IO load at 50% read and 50% write over 512 byte requests. In this case it is a test of what a specific client can drive under specific conditions. The client driving this load is a Windows 7 based VMware VM with 2 x 3.0GHz Intel i7 vCPUs running IOMeter with 64 worker threads. The VMware hypervisor host is a vSphere 5.1 based ESXi instance connected over a dual port 2Gb Fiber Channel HBA. The loading VM resembles a very heavy storage consumer in a virtual storage consumption context. In almost all of these evaluations we will be exercising the workloads against 2 defined pools, dp1 and dp2. dp1 is a pool hosted by the internal storage resources of the v7000 evaluation unit; mdisk0 is mapped as a RAID 5 array and mdisk4 is our mirrored SSD array. The SSD tier is not used in this specific test. We will also be exploring some externally configured FC attached storage which is defined to storage pool dp2, so more on that will follow later in this blog entry.
Let's explore the test configuration and its results.
The specifics for the workload are as follows:
Storage Server Stress Scale (1 – 10) = 10
Workers(Threads) = 64
Disk Size in Sectors = 24,000,000
IO Size in Bytes = 512
Outstanding Requests per Worker = 1
Random Operation Percentage = 100
Read Operation Percentage = 50
Write Operation Percentage = 50
Fiber Channel Paths = 2
Fiber Channel Speed in Gb = 2
Settling Time in Seconds = 240
Test 1 = Random IO Storage Provisioning Hell
Tool = IOMeter
Volume Name = svc1-san-vol0
Raid Mode = 5
Thick Block Map Mode = 0
Compression = 0
SSD Cache = 0
NV Ram Cache in GB = 8
As we can observe, the virtual disk sector map size of 24M sectors (roughly 12GB at 512 bytes per sector) far exceeds the storage server's non-volatile cache, which means the server will need to exercise random seeks for a significant portion of the requests. Let's look at the results graphically.
Obviously the v7000 system deals with the punishment very efficiently. Specifically we can see the latency is very low even under significant stress. This demonstrates that system performance excels even under the most demanding, completely random workloads. The v7000 design maturity is evident in this load test. I'm sure we could push it further than this, but we must keep the test results in context as the evaluation system only hosts an 8 drive 10k SAS RAID 5 array. The constraint is an effective way to observe how well the SAN Volume Controller cluster software performs.
Just for brevity I collapsed the 24M virtual disk down to 1M of assigned sectors in an effort to observe what the storage cache based IO response would present under a 100% random 50% read/write IO load.
Well, it was very apparent that the Windows 7 VM is the limiting factor here; however, it still demonstrates the value in the engineering. The reported latency is zero even with 23,500 random IOPS hitting the storage cluster.
I found the compression feature to be effective for almost any type of VM IO load. There are some basic rules of engagement one should follow when using compression on the v7000. The first rule surrounds the load type; specifically, be wary of very high intensity random write loads. This is not because the v7000 will not perform well for this load, it's actually rooted in the system's CPU load factor when other CPU demand factors are present. You do not want to push the normal running CPU load over 60% for a sustained period, as it will increase the possibility of creating excessive peak loading events. The second rule addresses the desire to engage Easy Tier functionality at the same time. The issue is that compression will not produce a proper heat map pattern, since writing compressed data is always a pattern shifting scenario and the activity will not remain at the last heat map hit point. You can still drive a compressed volume, but you would need to move the entire LUN to the SSD tier to be fully effective.
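For reference, a compressed volume on the v7000 is essentially a thin provisioned volume with the compression attribute set, and the CPU headroom rule above can be watched from the CLI. The following is a hedged sketch only; the cluster address, volume name, size and pool are mine, and the exact mkvdisk parameters should be verified against the CLI guide for your code level.
~# ssh superuser@svc1 'svctask mkvdisk -mdiskgrp dp1 -iogrp io_grp0 -size 100 -unit gb -rsize 2% -autoexpand -compressed -name svc1-san-comp0'
~# ssh superuser@svc1 'svcinfo lssystemstats'
(Watch the cpu_pc and compression_cpu_pc values while your heaviest loads are running before committing more volumes to compression)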
In order to gauge how well the v7000’s compression algorithm responds I chose to drive the engine with a typical Windows random 4k 70% read, 30% write load. Let’s observe the graphical result of a sustained 4 minute run.
The specifics for the workload are as follows:
Storage Server Stress Scale (1 – 10) = 6
Workers(Threads) = 32
Disk Size in Sectors = 24,000,000
IO Size in Bytes = 4096
Outstanding Requests per Worker = 1
Random Operation Percentage = 100
Read Operation Percentage = 70
Write Operation Percentage = 30
Fiber Channel Paths = 2
Fiber Channel Speed in Gb = 2
Settling Time in Seconds = 240
Test 2 = Typical Random IO Compressed Storage Provisioning
Tool = IOMeter
Volume Name = svc1-san-vol0
Raid Mode = 5
Thick Block Map Mode = 0
Compression = 1
SSD Cache = 0
NV Ram Cache in GB = 8
The results are very interesting; we can observe a substantial drop in the SAS backplane IOPS in the interface metrics section. This is an excellent result as it will reduce the mdisk load and thus increase the data throughput. Another important element is the greatly reduced IO load at the mdisk layer. The input side volume is receiving a total of 2516 IOPS while the output side only requires 1452 IOPS. This is certainly a valued performance enhancing behavior when employing the compression feature. The final element I see as noteworthy is the very low latency at the provisioned volume, which never exceeds 3ms during the entire load run.
As a bonus we only consumed 17% of the CPU and realized the following capacity gain from compression:
Even with the completely random data footprint of this performance behavior test case, we gained 32% in capacity when using the compression feature, and I'm quite happy with that result.
From the IOMeter client side the performance results do correlate, as demonstrated in this screen shot.
Storage tiering is one of the more important elements the v7000 Storage Server can provision. All the marketing noise for this product emphasizes that it's easy to use, and I would concur: it was very easy to use and it works without any effort. The v7000 provisions tiering by granting the storage administrator the ability to define performance classes of mdisk arrays within a pool. IBM engineers make use of IO activity heat maps to determine which block extents within a defined volume should be migrated to a higher performance tier. You do have control of the initial size of the extents when you create the pool itself. Once created you cannot change the extent size, nor should you. The default extent size is 256K, which I did a series of performance checks on, and the IBM engineers have chosen a very good default. 256K fits general use VM provisioning most suitably, with the best performance over a range from 32K to 1MB. The v7000 engineers chose a 24 hour cycle of activity time within the heat map data to determine which extents should move to a higher tier, and I agree with this methodology. Many discussions about using shorter sampling time algorithms circulate the web. I find that if the sampling window is too short or the extent size is too small the results are not favorable. When we move data too quickly we begin to thrash it around, and this is not efficient. Too much movement generates fragmentation, and it also uses backplane bandwidth and other system resources like cache unnecessarily. It also does not allow the system an opportunity to move the extent when the system is most idle.
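For completeness, the pool extent size and the mdisk tier class can both be set from the CLI as well as the GUI. This is only a hedged sketch using the pool and mdisk names from this evaluation; check the allowed -ext values and tier names for your code level before using it.
~# ssh superuser@svc1 'svctask mkmdiskgrp -name dp1 -ext 256'
(-ext fixes the pool extent size at creation time and cannot be changed later)
~# ssh superuser@svc1 'svctask addmdisk -mdisk mdisk0 dp1'
~# ssh superuser@svc1 'svctask chmdisk -tier generic_ssd mdisk4'
~# ssh superuser@svc1 'svctask addmdisk -mdisk mdisk4 dp1'
(Easy Tier becomes active once the pool contains both an HDD class and an SSD class mdisk)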
To observe the benefits of running tiered storage on the v7000 I chose to perform a before and after workload run using the same typical Windows 4k random IO with 70% read and 30% write. The specifics of the run are as follows, and we will observe the results at the IOMeter client side.
Storage Server Stress Scale (1 – 10) = 6
Workers(Threads) = 32
Disk Size in Sectors = 24,000,000
IO Size in Bytes = 4096
Outstanding Requests per Worker = 1
Random Operation Percentage = 100
Read Operation Percentage = 70
Write Operation Percentage = 30
Fiber Channel Paths = 2
Fiber Channel Speed in Gb = 2
Settling Time in Seconds = 240
Test 3 = Typical Random IO Tiered Storage Provisioning
Tool = IOMeter
Volume Name = svc1-san-vol0
Raid Mode = 5
Thick Block Map Mode = 0
Compression = 0
SSD Cache = 1
NV Ram Cache in GB = 8
And after 24 hours the same test parameters yielded the following result.
Obviously the result demonstrates significant IOPS performance gains. In this test the first workload run executed for 4 minutes and was then left idle for a period of 24 hours. Subsequently the second run was performed for the same 4 minute length. Within the IOMeter results I found the throughput gain quite remarkable. I was not expecting to see such a significant increase in the Total MB/s value; it's actually 22 times greater than the original run. I did have to run it a second time just to verify that it was not an anomaly in the original test run. After tearing down the volume, recreating it, rerunning the workload and waiting the required 24 hours, it again presented the same result. It's something I will have to investigate further as the reason eludes me for the moment. Nonetheless the numbers speak for themselves.
One of the most important features of the v7000 for me is the ability to virtualize external storage systems that are presented via Fiber Channel protocols. The reason I find value in this feature is that it grants the ability to move significant amounts of storage around without major impact to the primary external storage consumer. In addition to the migration capability, one can also front end an external storage host and synchronously mirror the data to a second external storage host.
The v7000 officially supports a significant number of FC based external storage systems. Personally I wanted to investigate whether the v7000 could handle an open source based product such as OpenSolaris, which now formally lives on in the Illumos based engines. There is a synergy to be gained between the world of the SVC and the commodity open source world. With that idea in mind I built the required elements and did some very interesting tests provisioning up some OpenIndiana FC based LUNs to the v7000.
Let's walk through some of the build elements.
The source storage host hardware was some off the shelf white box commodity components as follows:
1 – Antec Case
1 – LSI SAS3442 Adapter
1 – QL2462 Dual FC Adapter
8GB – DRAM
1 – Intel i7 930 CPU
4 – Seagate NL SAS ST32000645SS
1 – USB Flash
1 – X58 Gigatech Mobo
The USB Flash Drive was loaded with an OpenIndiana USB based install using version oi_151a7.
The basic OpenIndiana storage configuration elements are as follows:
~# zpool create -f sp1 raidz1 c7t13d0 c7t14d0 c7t15d0 c7t16d0
(4 Disk Raidz1 array)
~# update_drv -a -i '"pciex1077,2432"' qlt
(FC Target Mode Driver Binding For COMSTAR)
~# zfs create -b 64K -s -V 256G sp1/zfs1-san-vol1
~# zfs create -b 64K -s -V 256G sp1/zfs1-san-vol2
(Some sparse ZFS block volumes, i.e. zvols)
~# stmfadm create-lu /dev/zvol/rdsk/sp1/zfs1-san-vol1
~# stmfadm create-lu /dev/zvol/rdsk/sp1/zfs1-san-vol2
(Some COMSTAR exposed LUNs)
~# stmfadm create-hg svc1
~# stmfadm add-view -n 0 -h svc1 600144F0F5644400000050BD35750001
~# stmfadm add-view -n 1 -h svc1 600144F0F5644400000050BD35750002
(Some COMSTAR host groups and views to the LUNs now assigned with a GUID)
~# stmfadm add-hg-member -g svc1 wwn.500507680220146B
~# stmfadm add-hg-member -g svc1 wwn.500507680210146B
(Add the v7000 to the COMSTAR svc1 host group)
~# zfs create -s -b 8K -V 32G sp1/zfs1-san-vol3
~# stmfadm create-hg esx1
~# stmfadm add-hg-member -g esx1 wwn.210000e08b83cef2
~# stmfadm create-lu /dev/zvol/rdsk/sp1/zfs1-san-vol3
~# stmfadm add-view -n 4 -h esx1 600144F0F5644400000050CD19240001
(Create a volume to test the v7000 image mode)
During the initial testing I found that the v7000 does indeed successfully connect to the open source based OpenIndiana storage host and the LUNs are identified as generic targets. After discovering the LUNs they were added to the dp2 pool. As a comparison I chose to perform the typical Windows 4k 70/30 workload run on a newly created volume from the v7000.
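For those who prefer the CLI over the GUI, the discovery and pool assignment can be driven as follows. This is a hedged sketch that assumes the two OpenIndiana LUNs surfaced as mdisk5 and mdisk6; verify the actual names with lsmdisk first.
~# ssh superuser@svc1 'svctask detectmdisk'
~# ssh superuser@svc1 'svcinfo lsmdisk'
~# ssh superuser@svc1 'svctask addmdisk -mdisk mdisk5:mdisk6 dp2'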
Let's observe the metrics presented on the v7000 performance console.
The performance is impressive for a 4 disk array mdisk presentation. There is an interesting v7000 caching effect revealed in the mdisks metric panel, where we can observe the write load is only 420 IOPS versus the virtual volume IOPS write rate of 1650. This is definitely a beneficial impact of the non-volatile cache in the v7000 cluster. We can also see that the disk latency on the external side is considerably higher for write operations than that of the virtual volume layer. As well, we can see the external storage host is also optimizing the operations, demonstrated by the gradual increase in FC operations on the interface metrics panel. I'm very pleased to see that the v7000 can successfully serve an open source based storage target and that there are valuable optimizations gained from this configuration.
One element I was very interested in exploring was the image mode feature of the v7000 which gives us the ability to present a volume in passthrough mode. In other words the v7000 acts as the target presenting the external storage content as a block for block image. The same caching benefits observed in the above test are also presented when using the image mode. In this next test we will first present some storage from the external OpenIndiana host to a VMware ESXi 5.1 hypervisor and create a VMFS volume with it. We will then place the IOMeter client VM on the volume and run a load test using the Windows 4k 70/30 run. Then we will shut the VM down, remove the volume from the ESXi host and present the OpenIndiana LUN to the v7000 for import. Once imported into the v7000 in image mode we will present and add the LUN back to the ESXi host. Finally we will rescan the FC adapter for VMFS volumes and observe the result.
Let's walk through the operation graphically.
TZVM running on the OpenIndiana ZFS backed VMFS volume named zfs1-san-vol3.
Observing the VMware presented Fiber Channel paths for zfs1-san-vol3. Note the policy storage array type.
The pre-image mode migration IOMeter results are now presented for a 4 minute run. This is a 4K random 70/30 read/write mix. At this point we need to shut down the VM, and we will also remove the zfs1-san-vol3 datastore from the ESXi host prior to re-introducing the same volume over the v7000 SVC engine. We simply remove the ESXi FC initiator member definition from the COMSTAR esx1 group, and this will prevent any connectivity to the original datastore instance. We do this to prevent any VMware snapshot detection issues. At the same time we will add the zfs1-san-vol3 LUN view to the COMSTAR svc1 group.
~# stmfadm remove-hg-member -g esx1 wwn.210000e08b83cef2
~# stmfadm add-view -n 4 -h svc1 600144F0F5644400000050CD19240001
At this point we have run the v7000 mdisk detection and imported the newly discovered mdisk7 LUN, which backs the zfs1-san-vol3 datastore. We do not add the image to a pool.
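For reference, a rough CLI equivalent of that GUI import and mapping is sketched below. It is only an approximation of what the wizard does: the command line form of an image mode vdisk does want a pool to be named, so img_pool, the vdisk name and the assumption that the ESXi host object is called esx1 are all placeholders of mine.
~# ssh superuser@svc1 'svctask mkvdisk -mdiskgrp img_pool -iogrp io_grp0 -vtype image -mdisk mdisk7 -name zfs1-san-vol3-img'
~# ssh superuser@svc1 'svctask mkvdiskhostmap -host esx1 zfs1-san-vol3-img'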
We can now proceed to add the SVC image of zfs1-san-vol3 back to the ESXi host, and we can observe it's now exposed as an IBM Fiber Channel presentation on LUN4.
When the datastore is added VMware does notice the naa value is different, and it needs to confirm that we do want the current datastore volume to be mounted with the same signature as before. This is a typical response for a changed naa. If this were not the correct LUN for this signature, accepting this naa would introduce instability to this VMware VMFS clustered datastore on all other ESXi hosts.
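As an aside, if you would rather make that snapshot mount decision from the ESXi shell than from the vSphere client, esxcfg-volume exposes the same choice. This is not part of the original walkthrough, just a hedged pointer.
~ # esxcfg-volume -l
(Lists VMFS volumes that were detected as snapshots or replicas along with their UUIDs and labels)
~ # esxcfg-volume -M zfs1-san-vol3
(Persistently mounts the listed volume while keeping its existing signature)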
VMware does indeed identify the external v7000 image mode presentation and we can observe zfs1-san-vol3 is completely intact.
We can observe the newly defined paths and take note of the policy mode storage array type as it's now in SVC mode. We also now have 4 paths available.
With the ESXi host now up and running with the re-established zfs1-san-vol3 datastore in image mode over the v7000, we can now run the 4k random 70/30 read/write mix. We can observe an immediate gain of 500 IOPS, which is the write load hitting the v7000 nv-cache 4 minutes into the test, and we can see the synergy working. I let the test run for an additional 6 minutes, for a total run of 10 minutes, to observe the full cache benefit of the v7000 as the storage virtualization head.
Obviously we can see the benefit of the ZFS ARC cache and the v7000 nv-cache working together to improve our system latency and IOPS flow. The information presented in this exploration demonstrates that the v7000 brings value in many unique attributes and specifically drives a high degree of agility within the storage solution scope.
Well, this brings this blog entry to a close, and I must say the results were very interesting and enlightening.
I hope you enjoyed the post.
Regards,
Mike
Site Contents: © 2013 Mike La Spina
Running VMware’s vCSA on MSSQL
I really like the new vCenter Appliance, but I am not a fan of the embedded DB2, external DB2 or Oracle database options. It seems unusual that VMware did not support MSSQL on the first release of the appliance. I suspect they have their reasons, but I prefer not to wait for it when I know it runs without issue in my lab. If you're interested in the details read on and discover how I hacked it into submission.
The vCSA Linux host contains almost all the necessary components to drive an MSSQL DB. The missing elements are an MSSQL ODBC and JDBC driver. They are both available from Microsoft and can be installed on the appliance. Now as you should know VMware would not support such a hack and I’m not suggesting you run it in your world. For me it’s more about the challenge and adventure of it. Besides I don’t expect VMware to support it nor do I need the support.
Outside of these two Microsoft products it is necessary to modify some of the VMware property files and bash code to allow for the mssql drivers.
The appliance hosts 3 major application components: a web front end using lighttpd, a vpxd engine which appears to be coded in C, and a Tomcat instance. Surrounding these elements we have configuration scripts and files that provide end users an easy way to set up the appliance. The first area to address for MSSQL connectivity surrounds Microsoft's ODBC Driver 1.0 for Linux. It can be directly downloaded and installed on the vCSA using curl.
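The install steps I used went roughly like the following sketch. The download URL is a placeholder since Microsoft relocates these packages, and the install.sh verb and --force switch should be confirmed against the README in the tarball; --force is there because the appliance's unixODBC build is not what the installer expects, which is why several checks show NOT CHECKED in the output below.
vcsa1:/ # cd /tmp
vcsa1:/tmp # curl -O http://<microsoft download location>/sqlncli-11.0.1790.0.tar.gz
vcsa1:/tmp # tar -xvf sqlncli-11.0.1790.0.tar.gz
vcsa1:/tmp # cd sqlncli-11.0.1790.0
vcsa1:/tmp/sqlncli-11.0.1790.0 # ./install.sh install --force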
Enter YES to accept the license or anything else to terminate the installation: YES
Checking for 64 bit Linux compatible OS ..................................... OK
Checking required libs are installed ................................. NOT FOUND
unixODBC utilities (odbc_config and odbcinst) installed ............ NOT CHECKED
unixODBC Driver Manager version 2.3.0 installed .................... NOT CHECKED
unixODBC Driver Manager configuration correct ...................... NOT CHECKED
Microsoft SQL Server ODBC Driver V1.0 for Linux already installed .. NOT CHECKED
Microsoft SQL Server ODBC Driver V1.0 for Linux files copied ................ OK
Symbolic links for bcp and sqlcmd created ................................... OK
Microsoft SQL Server ODBC Driver V1.0 for Linux registered ........... INSTALLED
You will find the install places the ODBC driver in /opt/microsoft
We need to edit the appliance odbcinst template file to include the newly added driver.
vcsa1:/ # vi /etc/vmware-vpx/odbcinst.ini.tpl
We need to append the following ODBC driver entry:
[MSSQL]
Description = Microsoft ODBC driver for SQL v11
Driver = /opt/microsoft/sqlncli/lib64/libsqlncli-11.0.so.1790.0
UsageCount = 1
Threading = 1
The Microsoft driver will expect to have OpenSSL 1.0 available. It's not installed on the appliance and I don't feel it's necessary either. We can just point to the installed 0.9.8 code and it will have no issues. Some symbolic links are all we need to get things rolling, as shown here.
vcsa1:/tmp # ln -s /usr/lib64/libcrypto.so.0.9.8 /usr/lib64/libcrypto.so.10
vcsa1:/tmp # ln -s /usr/lib64/libssl.so.0.9.8 /usr/lib64/libssl.so.10
Tomcat also needs to access the MSSQL server, which requires the Microsoft JDBC driver; it too can be downloaded with curl.
vcsa1:/ # cd /tmp
vcsa1:/tmp # curl http://download.microsoft.com/download/0/2/A/02AAE597-3865-456C-AE7F-613F99F850A8/sqljdbc_4.0.2206.100_enu.tar.gz -o sqljdbc_4.0.2206.100_enu.tar.gz
vcsa1:/tmp # tar -xvf sqljdbc_4.0.2206.100_enu.tar.gz
vcsa1:/tmp # cp sqljdbc_4.0/enu/sqljdbc4.jar /usr/lib/vmware-vpx/common-jars/
I suspect that the JDBC driver is used within the Tomcat application to collect status info from ESX agents, but don’t hold me to that guess.
Once we have our MSSQL drivers in place we need to focus on hacking the config files and shell scripts. Let’s start with the web front end first.
Within /opt/vmware/share/htdocs/service/virtualcenter we find the appliance service configuration scripts and other various files. We need to edit the following files.
layout.xml – Database action fields
fields.properties – Database type field list values
We need to add the mssql DBType values to give us the option from the database configuration menu and to enable the action.
Layout needs the following segment replaced.
<changeHandlers>
  <!-- actions can be enable,disable,clear -->
  <onChange id="database.vc.type">
    <field id="database.vc.server">
      <if value="embedded" actions="disable,clear"/>
      <if value="UNCONFIGURED" actions="disable,clear"/>
      <if value="db2" actions="enable"/>
      <if value="oracle" actions="enable"/>
      <if value="mssql" actions="enable"/>
    </field>
    <field id="database.vc.port">
      <if value="embedded" actions="disable,clear"/>
      <if value="UNCONFIGURED" actions="disable,clear"/>
      <if value="db2" actions="enable"/>
      <if value="oracle" actions="enable"/>
      <if value="mssql" actions="enable"/>
    </field>
    <field id="database.vc.instance">
      <if value="embedded" actions="disable,clear"/>
      <if value="UNCONFIGURED" actions="disable,clear"/>
      <if value="db2" actions="enable"/>
      <if value="oracle" actions="enable"/>
      <if value="mssql" actions="enable"/>
    </field>
    <field id="database.vc.login">
      <if value="embedded" actions="disable,clear"/>
      <if value="UNCONFIGURED" actions="disable,clear"/>
      <if value="db2" actions="enable"/>
      <if value="oracle" actions="enable"/>
      <if value="mssql" actions="enable"/>
    </field>
    <field id="database.vc.password">
      <if value="embedded" actions="disable,clear"/>
      <if value="UNCONFIGURED" actions="disable,clear"/>
      <if value="db2" actions="enable"/>
      <if value="oracle" actions="enable"/>
      <if value="mssql" actions="enable"/>
    </field>
  </onChange>
</changeHandlers>
fields.properties needs the following edit, where we are adding mssql to the assignment statement.
database.type.vc.values = UNCONFIGURED;embedded;oracle;mssql
Once we have the web front end elements populated with the new values we can focus on the bash shell script. The scripts are located in /usr/sbin. We need to work on the following script.
vpxd_servicecfg – This script needs the following subroutines replaced with versions that format the database connection string for mssql. There are two areas which need modification: do_db_test and do_db_write. The test routine needs to accept mssql as a valid DBType and, based on the DBType, make a connection using a series of input parameters like the server address, user and instance. The config write routine also needs to detect the mssql DBType and make a custom modification for the DB connection URL. These calls depend on a proper mssql ODBC driver configuration.
###############################
#
# Test DB configuration
#
do_db_test()
{
   DB_TYPE=$1
   DB_SERVER=$2
   DB_PORT=$3
   DB_INSTANCE=$4
   DB_USER=$5
   DB_PASSWORD=$6

   log "Testing DB. Type ($DB_TYPE) Server ($DB_SERVER) Port ($DB_PORT) Instance ($DB_INSTANCE) User ($DB_USER)"

   case "$DB_TYPE" in
      "mssql" )
         log "DB Type is MSSQL"
         ;;
      "oracle" )
         ;;
      "embedded" )
         set_embedded_db
         ;;
      *)
         log "ERROR: Invalid DB TYPE ($DB_TYPE)"
         RESULT=$ERROR_DB_INVALID_TYPE
         return 1
         ;;
   esac

   if [[ -z "$DB_SERVER" ]]; then
      log "ERROR: DB Server was not specified"
      RESULT=$ERROR_DB_SERVER_NOT_FOUND
      return 1
   fi

   ping_host "$DB_SERVER"
   if [[ $? -ne 0 ]]; then
      log "ERROR: Failed to ping DB server: " "$DB_SERVER"
      RESULT=$ERROR_DB_SERVER_NOT_FOUND
      return 1
   fi

   # Check for spaces
   DB_PORT=`$SED 's/^ *$/0/' <<< $DB_PORT`

   # check for non-digits
   if [[ ! "$DB_PORT" =~ ^[0-9]+$ ]]; then
      log "Error: Invalid database port: " $DB_PORT
      RESULT=$ERROR_DB_SERVER_PORT_INVALID
      return 1
   fi

   if [[ -z "$DB_PORT" || "$DB_PORT" == "0" ]]; then
      # Set port to default
      case "$DB_TYPE" in
         "db2")
            DB_PORT="50000"
            ;;
         "oracle")
            DB_PORT="1521"
            ;;
         *)
            DB_PORT="-1"
            ;;
      esac
   fi

   #Check whether numeric
   typeset -i xport
   xport=$(($DB_PORT+0))
   if [ $xport -eq 0 ]; then
      log "Error: Invalid database port: " $DB_PORT
      RESULT=$ERROR_DB_SERVER_PORT_INVALID
      return 1
   fi

   #Check whether within valid range
   if [[ $xport -lt 1 || $xport -gt 65535 ]]; then
      log "Error: Invalid database port: " $DB_PORT
      RESULT=$ERROR_DB_SERVER_PORT_INVALID
      return 1
   fi

   if [[ -z "$DB_INSTANCE" ]]; then
      log "ERROR: DB instance was not specified"
      RESULT=$ERROR_DB_INSTANCE_NOT_FOUND
      return 1
   fi

   if [[ -z "$DB_USER" ]]; then
      log "ERROR: DB user was not specified"
      RESULT=$ERROR_DB_CREDENTIALS_INVALID
      return 1
   fi

   if [[ -z "$DB_PASSWORD" ]]; then
      log "ERROR: DB password was not specified"
      RESULT=$ERROR_DB_CREDENTIALS_INVALID
      return 1
   fi

   if [ `date +%s` -lt `cat /etc/vmware-vpx/install.time` ]; then
      log "ERROR: Wrong system time"
      RESULT=$ERROR_DB_WRONG_TIME
      return 1
   fi

   return 0
}

###############################
#
# Write DB configuration
#
do_db_write()
{
   DB_TYPE=$1
   DB_SERVER=$2
   DB_PORT=$3
   DB_INSTANCE=$4
   DB_USER=$5
   DB_PASSWORD=$6

   case "$DB_TYPE" in
      "embedded" )
         set_embedded_db_autostart on &>/dev/null
         start_embedded_db &>/dev/null
         if [[ $? -ne 0 ]]; then
            log "ERROR: Failed to start embedded DB"
         fi
         ;;
      * )
         set_embedded_db_autostart off &>/dev/null
         stop_embedded_db &>/dev/null
         ;;
   esac

   set_embedded_db

   ESCAPED_DB_INSTANCE=$(escape_for_sed $DB_INSTANCE)
   ESCAPED_DB_TYPE=$(escape_for_sed $DB_TYPE)
   ESCAPED_DB_USER=$(escape_for_sed $DB_USER)
   # these may be changed below
   ESCAPED_DB_SERVER=$(escape_for_sed $DB_SERVER)
   ESCAPED_DB_PORT=$(escape_for_sed $DB_PORT)

   case "$DB_TYPE" in
      "db2")
         # Set port to default if its set to 0
         if [[ "$DB_PORT" -eq 0 ]]; then
            DB_PORT=50000
            ESCAPED_DB_PORT=$(escape_for_sed $DB_PORT)
         fi
         DRIVER_NAME=""
         URL=""
         FILE=`$MKTEMP`
         $CP $DB2CLI_INI_OUT $FILE 1>/dev/null 2>&1
         DB_FILES[${#DB_FILES[*]}]="$DB2CLI_INI_OUT $FILE" # Store file tuple
         $SED -e "s!$TNS_SERVICE_SED_STRING!$ESCAPED_DB_INSTANCE!" \
              -e "s!$SERVER_NAME_SED_STRING!$ESCAPED_DB_SERVER!" \
              -e "s!$SERVER_PORT_SED_STRING!$ESCAPED_DB_PORT!" \
              -e "s!$USER_ID_SED_STRING!$ESCAPED_DB_USER!" \
              $DB2CLI_INI_IN > $DB2CLI_INI_OUT
         ;;
      "oracle")
         # Add [ ] around IPv6 addresses
         echo "$DB_SERVER" | grep -q '^[^[].*:' && DB_SERVER='['"$DB_SERVER"']'
         ;;
      "mssql" )
         TNS_SERVICE=$DB_INSTANCE
         # Set port to default if its set to 0
         if [[ "$DB_PORT" -eq 0 ]]; then
            DB_PORT=1433
            ESCAPED_DB_PORT=$(escape_for_sed $DB_PORT)
         fi
         ;;
   esac

   if [[ "$DB_PORT" -eq 0 ]]; then
      DB_PORT=`get_default_db_port $DB_TYPE`
   fi

   # Save the original ODBC and DB configuration files
   FILE=`$MKTEMP`
   $CP $ODBC_INI_OUT $FILE 1>/dev/null 2>&1
   DB_FILES[${#DB_FILES[*]}]="$ODBC_INI_OUT $FILE" # Store filename

   FILE=`$MKTEMP`
   $CP $ODBCINST_INI_OUT $FILE 1>/dev/null 2>&1
   DB_FILES[${#DB_FILES[*]}]="$ODBCINST_INI_OUT $FILE" # Store filename

   # update the values
   ESCAPED_DB_SERVER=$(escape_for_sed $DB_SERVER)
   ESCAPED_DB_PORT=$(escape_for_sed $DB_PORT)

   # Create new configuration files
   $SED -e "s!$DB_TYPE_SED_STRING!$ESCAPED_DB_TYPE!" \
        -e "s!$TNS_SERVICE_SED_STRING!$ESCAPED_DB_INSTANCE!" \
        -e "s!$SERVER_NAME_SED_STRING!$ESCAPED_DB_SERVER!" \
        -e "s!$SERVER_PORT_SED_STRING!$ESCAPED_DB_PORT!" \
        -e "s!$USER_ID_SED_STRING!$ESCAPED_DB_USER!" \
        $ODBC_INI_IN > $ODBC_INI_OUT

   $CP $ODBCINST_INI_IN $ODBCINST_INI_OUT 1>/dev/null 2>&1

   do_jdbc_write "$DB_TYPE" "$DB_SERVER" "$DB_PORT" "$DB_INSTANCE" "$DB_USER" "$DB_PASSWORD"

   return 0
}
At this point the appliance MUST be restarted to work correctly.
With the hacks applied, our appliance is now capable of driving an MSSQL database. On the MSSQL server side you need to have the database created and named VCDB. You will also require an SQL user named vc, which initially needs to be set as a sysadmin; once the database is initialized you can downgrade it to dbo of only the VCDB.
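Since the ODBC install also linked in sqlcmd, the server side objects can be created from any SQL client you like. Here is a hedged sketch of the minimum, with the server name, sa credentials and vc password all being placeholders:
vcsa1:/ # sqlcmd -S <mssql server> -U sa -P '<sa password>' -Q "CREATE DATABASE VCDB"
vcsa1:/ # sqlcmd -S <mssql server> -U sa -P '<sa password>' -Q "CREATE LOGIN vc WITH PASSWORD = '<vc password>', DEFAULT_DATABASE = VCDB"
vcsa1:/ # sqlcmd -S <mssql server> -U sa -P '<sa password>' -Q "EXEC sp_addsrvrolemember 'vc', 'sysadmin'"
(Once the appliance has initialized VCDB you can drop vc from sysadmin and leave it as dbo of VCDB only)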
The steps to add your database to the appliance are very easy and here are some screen shots of the web console database config panel to demonstrate this ease of implementation.
If you're interested in trying it out I have included the files for release 5.0.0-455964 here.
/etc/vmware-vpx/odbcinst.ini.tpl
/opt/vmware/share/htdocs/service/virtualcenter/layout.xml
/opt/vmware/share/htdocs/service/virtualcenter/fields.properties
I have found no issues to date in my lab after 15 days, but this does not mean it's issue free and I would advise anyone to use caution. This was not tested with heavy loads.
Well I hope you found this blog entry to be interesting and possibly useful.
Regards,
Mike
Site Contents: © 2012 Mike La Spina
Updated ZFS Replication and Snapshot Rollup Script
Thanks to the efforts of Ryan Kernan we have an updated ZFS replication and snapshot rollup script. Ryan's OpenIndiana/Solaris/Illumos community contribution improves the script to allow for a more dynamic source to target pool replication and changes the snapshot retention method to keeping a specific number of snapshots rather than a Grandfather-Father-Son scheme.
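For anyone unfamiliar with the underlying pattern, it boils down to incremental zfs send/receive plus pruning by snapshot count. The sketch below is not Ryan's script, just a minimal illustration of the idea with placeholder dataset, host and retention values.
#!/usr/bin/bash
# Illustrative values only
SRCFS=sp1/zfs1-san-vol1
DSTHOST=backuphost
DSTFS=backuppool/zfs1-san-vol1
KEEP=8

NOW=repl-`date +%Y%m%d%H%M%S`
# Use the most recent replication snapshot as the incremental base if one exists
LAST=`zfs list -H -t snapshot -o name -s creation -r $SRCFS | grep @repl- | tail -1`

zfs snapshot $SRCFS@$NOW
if [ -n "$LAST" ]; then
  zfs send -i $LAST $SRCFS@$NOW | ssh $DSTHOST zfs receive -F $DSTFS
else
  zfs send $SRCFS@$NOW | ssh $DSTHOST zfs receive -F $DSTFS
fi

# Roll up: keep only the newest $KEEP replication snapshots on the source
COUNT=`zfs list -H -t snapshot -o name -r $SRCFS | grep -c @repl-`
zfs list -H -t snapshot -o name -s creation -r $SRCFS | grep @repl- | while read SNAP; do
  if [ $COUNT -gt $KEEP ]; then
    zfs destroy $SNAP
    COUNT=`expr $COUNT - 1`
  fi
done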
Regards,
Mike
Site Contents: © 2011 Mike La Spina
OpenSolaris Door Closes
Months of silence from Oracle on any official statement about OpenSolaris support and development have passed. With that inaction the OpenSolaris Governing Board has motioned to disband and has passed the motion. This now leaves Oracle alone with its not so open "Open Source" operating system. Unfortunately this inaction does not follow Larry Ellison's public statement indicating that OpenSolaris support would continue. In my view, closing the source code until Oracle develops a new release of Solaris is certainly not what I consider an Open Source effort. Granted, most of the development of OpenSolaris code was performed by Sun engineers; this would be expected, as the Solaris OS source was only exposed for 5 years, and that really is a short window of time for other developers to familiarize themselves with the code. Surely Larry, as an intelligent man, knows that this will result in the abandonment of OpenSolaris by the 40,000+ users that were actively exploring it. The only viable alternative will now be held by Nexenta in Illumos. In the background IRC chatter we have seen leaked email evidence that Oracle wishes to keep Solaris closed and will only release the OpenSolaris source when they have greatly distanced the public from its features. Not a good move for the Open Source world, but that's the way this bit of history is unfolding.
I’m looking forward to working with Illumos, hopefully you are too.
Maybe Mr. Ellison could do some rethinking about what legacy he would like to leave the world.
Regards,
Mike
Site Contents: © 2010 Mike La Spina
The Illumos Project Launches
If you use or are interested in OpenSolaris then you should check out the Illumos Project, which was announced today by Garrett D'Amore of Nexenta. It's an excellent development project which initially is working toward delivering a compatible, fully open sourced version of the closed OpenSolaris binaries. At first I thought this was going to be a pure fork of OpenSolaris; however, it's not really a fork. The Illumos project maintains close compatibility and functionality with its parent OpenSolaris code stream while granting more innovative development freedom and full community control. All good things in my books.
http://www.illumos.org/projects/site/wiki/Announcement
Regards,
Mike
Site Contents: © 2010 Mike La Spina