• RELEVANCY SCORE 4.76

    DB:4.76:Perform/Pred In Cluster/San Environment? ap






    Does anyone have experience with Perform/Predict in a clustering and SAN environment? I have a customer who runs Compaq Tru64 UNIX with Tru64 Cluster and has a SAN attached to the system. Can Perform/Predict model this setup accurately? Any comments are welcome...

    DB:4.76:Perform/Pred In Cluster/San Environment? ap


    A virtual package is a virtual IP address that transfers between systems on failover; it also contains all the system/disk information needed for failover. My problem has been that Perform/Predict does a kernel call to get the host info, which is tied to the primary IP of the system, so it will not follow a virtual IP.

  • RELEVANCY SCORE 2.99

    DB:2.99:Migrate Exchange Database From Nas To San In Cluster Environment 1d





    Hello,
    We are using MS Exchange 2007 Enterprise with two servers configured in a cluster (Exchange database cluster). The databases are configured and stored on NAS (Network Attached Storage), and we have purchased a new SAN (Storage Area Network) device. We want to migrate
    this cluster from the NAS device to the SAN device.
    I would like to know the best practice for performing this task and what precautions we need to keep in mind before starting this activity.
    Your help is highly appreciated...
    Regards,
    Arjun V.

    DB:2.99:Migrate Exchange Database From Nas To San In Cluster Environment 1d

    I would share/publish the new storage to the servers, configure the drives/storage, and then create new databases there and move the mailboxes.
    After that it's just a move-mailbox command you need to perform,
    and then move the cluster resource.

    Go for the alternative that you feel most comfortable with
    Jonas Andersson | Microsoft Community Contributor Award 2011 | MCITP: EMA 2007/2010 | Blog: http://www.testlabs.se/blog | Follow me on twitter: jonand82
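
    For the mailbox-move step, a minimal Exchange 2007 Management Shell sketch (server, storage group and database names are placeholders, not taken from this thread):

        # Move every mailbox from the NAS-backed database into the new SAN-backed database.
        Get-Mailbox -Database "EXCLUSTER\First Storage Group\NAS-DB" |
            Move-Mailbox -TargetDatabase "EXCLUSTER\Second Storage Group\SAN-DB" -Confirm:$false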

  • RELEVANCY SCORE 2.86

    DB:2.86:Joining A Production Hyper-V To Existing Cluster mf





    Hello,
    I currently have no possibility to perform a lab test on this.
    We have an existing 2008 R2 SP1 Hyper-V cluster that is working just fine with CSV etc. The thing is that we have a Hyper-V host (serverC) that is not joined to a cluster but is hosting many VMs.
    Our SAN does not support SAN transfers (until SAN iQ 10, possibly later on) and we absolutely want to avoid network copy migrations of the VMs.
    Would it be possible to join this existing Hyper-V host to the existing cluster, connect it to the CSV, and perform live migrations? Will SCVMM accept that non-HA VMs exist on a server that has joined the cluster? Should we expect other issues?
    In the long run we will not keep serverC in this cluster.
    Best regards
    ErikErik, with many, many certs! ;-)

    DB:2.86:Joining A Production Hyper-V To Existing Cluster mf

    Vincent, yes that is correct. We have 1:1 in terms of VMs per LUN. LUN 2 has a path to a volume mount point.
    I understand perfectly how we need to handle the LUNs in terms of giving all nodes access to the LUNs without mounting them. Are you saying we should add LUN 2 as a disk in Failover Cluster Manager and then bring it into the CSV? Then shut down the VM, change the path (from
    the volume mount point to the CSV path), start the VM and then voilà? Erik, with many, many certs! ;-)

  • RELEVANCY SCORE 2.75

    DB:2.75:Is Ms Cluster Supported On Iscsi San? Maximum No Of Nodes? sp


    Hi Guys

    I have a Windows cluster (file server): 2 nodes connected to DAS (PV220s), RAID5, 11 disks, NTFS.

    I want to move the cluster into a virtual environment and also move the data from DAS to SAN.

    Is this supported? (MS cluster on an iSCSI SAN; how many nodes maximum?)

    If you have any thoughts please let me know.

    One of the guys here suggested moving the data first (extra NIC in the server - iSCSI connection to the SAN).

    How shall I go about moving data from DAS (NTFS) to the iSCSI SAN (VMFS)? Do I have to create an NTFS share in the free space on the SAN?

    How will the virtual machines access the data (VMFS or NTFS)?

    Please shed some light and enlighten me.

    DB:2.75:Is Ms Cluster Supported On Iscsi San? Maximum No Of Nodes? sp


    I was the "guy" I think.

    What I was thinking was that you could add iSCSI LUNs to the existing physical cluster (they would be formatted NTFS), then robocopy the data over. Once that was complete you could set up a cluster in your ESX environment and have the new cluster attach to the existing iSCSI LUNs via RDMs (http://vmzare.wordpress.com/2007/02/19/vmware-raw-device-mappingrdm/).

    There would be some downtime for the switch, but this is the simplest method and, to my mind, the one that offers the best fallback at a number of steps along the way.

    MSCS cluster-in-a-box (two nodes on a single ESX host) or across boxes (two ESX hosts, one node on each) is supported. Two nodes are supported, but it is possible (I think) with RDMs to have as many as 8, although this would not be a supported configuration.
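
    As a rough illustration of the copy step, a hedged robocopy sketch (drive letters and the log path are placeholders: D: is the existing DAS volume, E: the new NTFS-formatted iSCSI LUN presented to the active node):

        # /MIR mirrors the whole tree, /COPYALL keeps NTFS security, /R and /W keep retries short.
        robocopy D:\ E:\ /MIR /COPYALL /R:1 /W:5 /LOG:C:\Temp\das-to-iscsi.log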

  • RELEVANCY SCORE 2.73

    DB:2.73:Vmfs Change In Esx3.0.2 To Esx3.5 8p


    Hello guys,

    I am planning to upgrade my ESX 3.0 environment to ESX 3.5. I don't want to use the in-place upgrade option in ESX, though I could.

    I'm planning to VMotion the VMs to other hosts in the cluster and do a fresh installation of ESX 3.5 on each host; after upgrading a host, I'll VMotion the VMs from 3.0 to 3.5 and repeat the fresh installation on all the hosts. While doing each fresh installation I'll disconnect the fibre connection to the SAN, as all the VMs reside on the SAN, and reconnect the SAN after the upgrade. Hope this works.

    My concern is that, right now, the VMFS version of the storage in ESX 3.0.2 is VMFS 3.21. Do I need to change anything in the filesystem?

    I do have an ESX 3.5 environment in my VC, and it displays the VMFS version of the storage in ESX 3.5 as VMFS 3.31.

    DB:2.73:Vmfs Change In Esx3.0.2 To Esx3.5 8p


    Check the last post in this discussion for additional info.

  • RELEVANCY SCORE 2.68

    DB:2.68:Drives In A Cluster Environment k9


    Hi,
     
    I have a SAN and am configuring a cluster for SQL 2005. I initially created a quorum drive when setting up the cluster and have now added 4 more drives to the physical node, but when I try to install SQL those drives cannot be located.
    Do we need to create all the drives when installing the cluster, or what is the way to add drives later on?
     
    Thanks
    Anup

    DB:2.68:Drives In A Cluster Environment k9

    No, one DTC per cluster. It is shared with everything else in the cluster.
    You will need separate dedicated disks and such for your other SQL instance(s) though.
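
    On the second part of the question (adding drives after the cluster is built), a hedged cluster.exe sketch for Windows Server 2003 (group name and drive letter are placeholders; the disk's signature private property also has to be set, which is usually easiest from Cluster Administrator):

        # After the LUN is presented to both nodes and given a drive letter on the active node:
        cluster res "Disk F:" /create /group:"SQL Group" /type:"Physical Disk"
        # (set the disk's signature private property, e.g. via Cluster Administrator, before onlining)
        cluster res "Disk F:" /online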

  • RELEVANCY SCORE 2.68

    DB:2.68:Vmotion Issue fz


    Not sure if this is the most appropriate forum, but I have a vmotion issue. I've just installed a new environment and whenever I choose to migrate a VM to another host I get the message "The virtual machine must be powered off to perform this function". Why?

    I have two vSphere hosts managed by vCenter in a single cluster.

    Things I've checked are:

    - License : VMotion listed under "Add-on Features" on each host

    - "VMotion Enabled: yes" on the summary tab of both hosts

    - VMKernel Port Group created and VMotion enabled on both hosts

    - Can ping both hosts VMKernel IP address from Service Console on both hosts

    - Checked the VM to ensure that no devices are configured to use the host hardware (i.e. floppy, DVD)

    - Verified that network labels on both hosts are the same

    - VM is hosted on common shared storage (FC SAN)

    Am running out of further ideas, so any suggestions gratefully received.

    Cheers, Nick.

    DB:2.68:Vmotion Issue fz

    OK, got to the bottom of the problem - an IP Address conflict on one of my VMKernel ports (someone's not been updating our IP address spreadsheet - I now need to hunt the culprits down and kill them ).

    I looked at the links that addressed the "The virtual machine must be powered off to perform this function" error and still couldn't see the solution until the penny dropped - that warning isn't warning me of anything at all! It's associated with the last option only - "Change both host and datastore" (or similar) - even though it looks like it's for the whole menu. That's terrible GUI design.

    Cheers, Nick.

  • RELEVANCY SCORE 2.67

    DB:2.67:Ms Exchnage Cluster Test Environment x8


    Hi,
    We are trying to implement a test environment for a 2-node MS Exchange 2007 cluster. As it is only a test environment, we cannot afford to buy a SAN/iSCSI/IP-SAN to implement the cluster. In fact we are using two high-configuration PCs as the two Exchange server nodes.
    I would like to know if we can implement the Exchange 2007 cluster using a third PC or a network drive on a separate machine in the LAN as the common storage (in place of a SAN box or iSCSI box). If this is not possible, is there any other simple and cost-effective way by which we can implement an MS Exchange 2007 cluster and test it?
    Your kind feedback will be appreciated.
    Thanks & regards,
    Srinivas

    DB:2.67:Ms Exchnage Cluster Test Environment x8

    Dear Richard,
    Thanks for the information. We really appreciate your response and suggestions.
    Thanks & regards,
    Srinivas

  • RELEVANCY SCORE 2.67

    DB:2.67:Exchange 2007 Scc And Active Directory Recovery s9


    I have a development Windows Server 2008 Active Directory and Exchange 2007 SCC environment. This environment is contained in VMware ESX 3.5. In the process of testing DR scenarios, I restored a SAN-based snapshot of all my DCs and the Active/Passive nodes of the Exchange 2007 SCC. Active Directory came back up fine (I was surprised; I was expecting USN rollback). I know that this is NOT a supported recovery method.
    My Exchange 2007 SCC is reporting the following error (Event ID 2011):
    An unexpected failure has occurred. No modules were loaded and the service will perform no work. Diagnostic information: The Windows Cluster service encountered an error during function OpenCluster:.
    I'm guessing that the cluster information in Active Directory is corrupt or missing. I have not tried /RecoverCMS yet. Also, the File Share resource for Exchange 2007 is missing in the Failover Cluster Management console and the icons look different as well.
    Any ideas?

    DB:2.67:Exchange 2007 Scc And Active Directory Recovery s9

    You must like a challenge or something! Why would you use snapshots on your domain controllers? I'm afraid I won't be much help, but I thought I could point out that /RecoverCMS will reinstall a box with the CMS information that is stored within AD. If the AD info is inconsistent/corrupt, this switch won't be able to fix that. Do you have a backup of AD? Could you restore that and try re-deploying your server? Mike Crowley: MCT, MCSE, MCTS, MCITP: Enterprise Administrator / Messaging Administrator
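
    For reference, the /RecoverCMS step mentioned above is run from the Exchange 2007 installation media on a rebuilt node; a hedged sketch (the CMS name and IP address are placeholders, not taken from this thread):

        # Re-creates the clustered mailbox server from the configuration stored in AD;
        # it cannot help if that AD data is itself corrupt, as noted above.
        .\Setup.com /RecoverCMS /CMSName:EXCMS01 /CMSIPAddress:10.0.0.50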

  • RELEVANCY SCORE 2.66

    DB:2.66:Esx Host And Storage On A San cf


    We currently have 8 ESX servers hosting approximately 40 VM guests in a VM cluster. An IBM DS4400 SAN hosts the storage for this VM environment. We have roughly 10 LUNs configured as storage. Here is the issue we are seeing, and we would like to know if anyone has experienced this. We have added a couple of new ESX hosts and migrated an existing host, of which rebuilding was part of the process. Each time we reboot an ESX host (so far it has only been the newly built or rebuilt hosts) or perform a port scan of the HBAs, our whole virtual environment experiences what I would describe as a "hiccup" in its access to the SAN. We will see a group of servers (we are unable to identify a pattern as to which group) that, when accessing the console, show "no OS found". Resetting the guest resolves the issue. We will also find a number of guests with I/O errors in the event logs; rebooting them clears the error. We do not see any issues with non-VM servers that access the SAN, nor do we see any indication of a problem on the SAN itself - no errors reported in any logs, etc.

    Any help I can get would be greatly appreciated. I am not sure, but I suspect the SAN configuration for these hosts may be off, or something else needs to be configured on the ESX servers - but I am unsure why it would affect the other ESX hosts. Everything "looks" correct.

    Jim

    DB:2.66:Esx Host And Storage On A San cf

    The only way I know of to reduce reservation issues (and this depends on the SAN model) is to reduce the number of VMs per LUN. For the heaviest VMs we give the hard-hit datastore a dedicated LUN.

    We really are not seeing any reservation issues, and we use cheap iSCSI SANs.

  • RELEVANCY SCORE 2.65

    DB:2.65:Vm Clustering In Hyper-V Failover Cluster x9


    Hi,
    Setup environment:
    2 hosts are configured in a Hyper-V failover cluster (Windows 2008 R2). The servers are connected to the SAN using Fibre Channel.
    Now I would like to configure a SQL 2008 failover cluster in this environment, so one VM will be on Host 1 and another VM will be on Host 2.
    Can I configure my SQL Server failover cluster within the Hyper-V cluster environment? If yes, any guidance; if not, any other suggestions?
    Thanks

    DB:2.65:Vm Clustering In Hyper-V Failover Cluster x9

    This link
    http://blogs.msdn.com/b/sqlserverfaq/archive/2011/02/25/sql-server-2008-clustering-using-hyper-v-video.aspx is not complete.
    In case you are still hanging on this, let me know and I'll paste my PPT.

  • RELEVANCY SCORE 2.65

    DB:2.65:Boot From San Installation In Cluster 7x


    Hi,

    I need to implement a cluster of 4 ESX 3.5 servers in a SAN environment. All servers are going to be diskless, which means they are all going to boot from SAN. My question is: how many host partitions am I going to need with that setup? As far as my knowledge goes, I assume I am going to need one LUN per server for booting from SAN and one shared partition for all servers for the clustered datastore. So my count is at 5 partitions.

    I have been told that you can use the same shared partition for all my boot-from-SAN ESX servers. Could that be possible? I really doubt it, but I just wanted some feedback on that.

    Hope this is clear and thank you.

    DB:2.65:Boot From San Installation In Cluster 7x


    You can share an MBR, but I would not do it - imagine if you made a mistake, it would affect the ability to boot all nodes of the cluster. Best practice is to have each host boot from its own LUN 0.

  • RELEVANCY SCORE 2.65

    DB:2.65:Replacing Cluster Node Hardware dp


    Hi,
    I'm currently in the following situation:
    (Windows) cluster named: Cluster1
    Currently: 2 nodes named nodeA and nodeB function as Cluster1 (SAN disks)
    Step 1: 2 nodes named nodeC and nodeD will be added to Cluster1 (new nodes: SAN disks unattached)
    Step 2: nodeC and nodeD will be provided with the same SAN disks as nodeA and nodeB
    Question: Do nodeC and nodeD understand that they are NOT allowed to write to the SAN disks, or will they claim the SAN disks immediately?
    The reason I'm asking is that the SQL administrator will be migrating the databases from nodeA/nodeB to nodeC/nodeD, so I only want to provide the SQL administrator with the original cluster and the 2 new nodes attached.
    This cluster and nodeA/nodeB are in a production environment and there is no way they can encounter any downtime.

    DB:2.65:Replacing Cluster Node Hardware dp

    Newly added nodes do not own any resources. A cluster node has to be configured as a possible owner of a resource group before it can own it. And even if the node is configured as a possible owner, it will not own the resources if they are not properly configured. For example, the SQL Server clustered resource will not fail over properly to a node if the node does not have the binaries installed or has not been added as a node of the SQL Server failover clustered instance.
    Edwin Sarmiento SQL Server MVP | Microsoft Certified Master
    Blog | Twitter | LinkedIn
    SQL Server High Availability and Disaster Recovery Deep Dive Course
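
    For reference, a minimal sketch of checking and setting possible owners, assuming the Windows Server 2008 R2 FailoverClusters PowerShell module (resource and node names are placeholders; on Windows 2003 the same is done with cluster.exe or Cluster Administrator):

        Import-Module FailoverClusters
        # Show which nodes are currently allowed to own the SQL Server resource.
        Get-ClusterOwnerNode -Resource "SQL Server"
        # Allow the new nodes as possible owners (they still need the SQL binaries / add-node setup).
        Set-ClusterOwnerNode -Resource "SQL Server" -Owners nodeA,nodeB,nodeC,nodeD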

  • RELEVANCY SCORE 2.64

    DB:2.64:Cluster Move To New San ac


    Guys, I am about to move our existing SQL 2005 cluster onto a new SAN. I've tested the following in a virtual environment and it looks to work.
    1. Present the new SAN LUNs to the active node.
    2. Format, partition, and assign temporary drive letters.
    I have four cluster groups.
    Cluster Group (first group):
    1. Create a new Physical Disk resource using the new SAN drive.
    2. Bring it online.
    3. Assign the quorum to the new physical disk.
    4. Remove the old SAN resource.
    Next group (SQL Group):
    1. Create a new Physical Disk resource.
    2. Take SQL Server, Agent, etc. offline.
    3. Put the disks into maintenance mode: CLUSTER.EXE . RES Disk X: /maint:on
    4. Xcopy the data to the new drive.
    5. Change dependencies (SQL Server, etc.) to use the new resource.
    6. Remove the old resource.
    7. In Disk Management, remove the old drive letter and assign it to the new replacement drive.
    8. Bring the SQL Group online, etc.
    Thanks
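
    As a rough sketch of steps 3-4 for one disk pair, assuming cluster.exe on the active node (resource names, drive letters and xcopy switches are illustrative, not a tested procedure):

        # Put the old and new disk resources into maintenance mode so the cluster does not
        # fail them while files are being copied, then copy and turn maintenance mode off.
        cluster res "Disk X:" /maint:on
        cluster res "Disk Y:" /maint:on
        xcopy X:\*.* Y:\ /E /H /K /O /X
        cluster res "Disk X:" /maint:off
        cluster res "Disk Y:" /maint:off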

    DB:2.64:Cluster Move To New San ac

    Aussieron: Did this work OK? Any issues? Thanks.

  • RELEVANCY SCORE 2.64

    DB:2.64:Thread: Best Way To Perform Kernel Upgrades On Cluster With Compiled On-Add-Drivers (Like Ibm Rdac) sm


    Hi,

    I have to upgrade an OES2 SP0 cluster (Linux-based) to SP2a, but I guess my question is relevant for all kernel updates as well: my experience from installing a kernel update is that a new initrd will be created, which kicks out all manually installed/compiled drivers.

    Background: I had to use manually compiled drivers to activate RDAC support to connect the servers to the SAN using 2 HBAs per server (multipath). At least at the time, that was the only known option, because the standard multipath tools do not support IBM DS3x00 SANs.

    When the server boots up after installing a kernel update, it loads the new initrd and therefore loads the standard HBA drivers and no RDAC drivers. This causes cluster services to fail loading, because it does not find the cluster partition and so on. I can recompile/install the modified HBA drivers and RDAC drivers, and after the next boot everything is fine again.

    But: is this really the way to do this? Can I be sure that SLES/OES does not touch the SAN partitions (which are in use by the other cluster node!) at all and does not cause any trouble?

    Or which way is the right way to perform such updates?

    Any advice and experiences are appreciated.

    Note: disconnecting the server from the SAN is not an option, because RDAC needs communication with the SAN during installation.

    Thanks in advance,

    Frank

    DB:2.64:Thread: Best Way To Perform Kernel Upgrades On Cluster With Compiled On-Add-Drivers (Like Ibm Rdac) sm

    Originally Posted by Frank Langner


    I cannot speak for IBM but can only share what we had with HP.

    We are using HP BL460c blades with the qmh2462 OEM mezzanine cards. We are using multipathing, so each LUN is seen twice.

    Originally we read/believed HP's site that said they only supported SLES if you used THEIR drivers. Well, we put their driver on, and of course, after a kernel update, not only did the driver NOT auto-recompile like it was supposed to (so I did it manually), but it also left bad parameters in the startup that no longer allowed SLES to see the disks (and we boot from the SAN, so that made for fun times).

    After finally getting the server up and running, I decided enough of that and simply reinstalled (this was a test), and this time used the SLES QLogic drivers instead of the HP-supplied ones.

    That has been the most stable for us, and we have had no issues since then.

    I don't know though if it's possible to just install the RDAC stuff (RDAC should be independent of the HBA drivers, as it's IBM's equivalent of the HP iLO).

    For HP, the only thing we will optionally install is the iLO high-performance mouse driver for Linux.

    So the only advice I can give is for you to check with IBM whether they have an auto-recompile feature/switch for the driver, or just use the native kernel driver and see if you can install the RDAC software you need without having to use their special QLogic driver.

  • RELEVANCY SCORE 2.64

    DB:2.64:"Half" Virtual Mscs In Production (Not Testing...) 1f



    I wonder if someone has implemented, in their production environment, an MSCS cluster where one node is physical and the other is virtual (the VM resides on ESX 3).

    I assume that the cluster LUNs reside on the SAN as NTFS, and not on VMFS.

    The same question is relevant for an NLB (Network Load Balancing) cluster.

    Thanks in advance

    IS

    DB:2.64:"Half" Virtual Mscs In Production (Not Testing...) 1f


    FYI, as of the 8/7/07 SAN compatibility guide, Hitachi supports MSCS clusters on most of their HW in VI3.

    -MattG

  • RELEVANCY SCORE 2.63

    DB:2.63:Upgrading Merging Of Two Cluster 1x



    The customer has two clusters on version 6.1.x in an MCS environment. He is planning to move them to a UCS environment on version 8.6.2 and, in the process, merge the two clusters. There are also various applications (MeetingPlace Express, Contact Center Express, Unity) running on the 6.1.x clusters, and he wants those applications to keep running seamlessly with the 8.6.x version.

    Please let me know any considerations I need to take care of.

    Here is my plan:

    Do a bridge upgrade for one of the clusters:

    - Install CUCM 8.6(2a) on the inactive partition on the current MCS hardware

    - Bring up the current CUCM cluster on 8.6(2a) and perform a DRS backup

    - Build the UCS C-Series servers and install CUCM 8.6(2a) on the new CUCM cluster

    - Perform a DRS restore on the new cluster

    For the other cluster, do a BAT import into this new cluster.

    Thanx

    anupam

    DB:2.63:Upgrading Merging Of Two Cluster 1x


    In addition you will need to repoint all the apps as well as GWs to the new IP address of the server.

    Also, there is no direct upgrade from version 6 to 8.6 as you will need to go through 7.1.

    HTH,

    Chris

  • RELEVANCY SCORE 2.61

    DB:2.61:Problems With Converting Ms W2k3 Server With Ms Cluster Onto Esx 18



    Hi all,

    We are currently converting our complete production environment, which runs on HP blades, into VMware. The OS of these systems is Windows 2003 Enterprise Server and most servers are running MS Cluster.

    The conversion moves the systems from one domain to a new domain with different IP addresses and different SAN storage for the quorum. However, when the conversion finishes (successfully) and the new quorum is manually set up at the new location, the VM still has its old cluster settings (cluster name, node names, shared resources, etc.) in place, so the cluster cannot be started.

    Does VMware provide a whitepaper on how to convert physical MS servers with clustering running, so that all resources of the physical cluster can be transferred into the newly converted VM cluster (e.g. keep the cluster resources, but rename the cluster and the cluster nodes)?

    Any help would be greatly appreciated!

    DB:2.61:Problems With Converting Ms W2k3 Server With Ms Cluster Onto Esx 18



  • RELEVANCY SCORE 2.61

    DB:2.61:Sun Cluster 3.1 And Emc Clariion San cz


    Hi all, does anyone have documentation/instructions on how to build a (2-node) Sun Cluster 3.1 that is attached to an EMC CLARiiON SAN (i.e. the cluster shared storage is on the EMC CLARiiON SAN), with PowerPath handling the multipathing on each cluster node?

    Thanks in advance,
    Stewart

    DB:2.61:Sun Cluster 3.1 And Emc Clariion San cz

    You might want to check a whitepaper from EMC that outlines the EMC CLARiiON-specific parts:

    http://www.emc.com/partnersalliances/partner_pages/pdf/300-003-612_a01_elccnt_0.pdf

    Once the storage is seen by both nodes and scdidadm can identify identical paths to the same LUN, there should be nothing EMC CLARiiON-specific about the installation.

    Greets
    Thorsten

  • RELEVANCY SCORE 2.61

    DB:2.61:Vm Performance Issues - Sophos Anti-Virus Related zd



    We have an environment that consists of 12 hosts split into two clusters, 8 in one and 4 in the other. The cluster of 8 has around 200 Windows XP VMs running within it and we're seeing terrible performance issues, especially in the morning when everyone logs on, opens applications etc. On checking the various performance graphs using the Infrastructure Client we're not seeing any high CPU, memory, network or disk usage at all. Both the SAN and network interfaces are also massively under-utilised, so we just cannot work out what exactly is causing the VMs to perform so badly. We've also troubleshot within the OS and again there is no indication of a performance issue.

    Another interesting point is that the second cluster only has a handful of VMs within it and these are also experiencing the same issues.

    The only way that we've been able to resolve the issue is to disable Sophos anti-virus, which is installed on each VM. With Sophos disabled our users do not complain of slowness at all. We've spoken to Sophos about it, but they have been no help at all.

    Is there any other troubleshooting that can be done, maybe through the CLI, that could point to what exactly is causing this slowdown? Has anyone else had anti-virus performance issues within their environment?

    DB:2.61:Vm Performance Issues - Sophos Anti-Virus Related zd


    Hi Scissor,

    Since my last post we've pulled our VM environment apart and have found a few issues, to say the least. The poor performance we were seeing was related to the SAN configuration, and the extra I/O generated by Sophos was pushing it over the edge. When the DS3400 was originally set up for the VM environment, which was before my time at the company, it was configured as follows:

    - 12 x 700GB SATA disks (10 active, 2 hot spare)

    - 5 disks configured in RAID1 which is mirrored to the other 5, giving 3.5TB usable storage capacity.

    - 9 LUNs (8 x 400GB 1 x 300GB)

    Not the most performance-efficient configuration, especially as we now have 200 VMs on this single storage device!

    On top of this, there was an error on the DS3400: Logical Drive not on preferred path due to ADT/RDAC failover. This meant that all the logical drives were controlled by a single controller, rather than being alternated across both. It turns out that our fibre was incorrectly connected and also that our ESX hosts were set to fixed path rather than most recently used. On sorting the fibre topology and changing all the ESX path configuration the DS3400 is free of errors and traffic is balanced across both controllers for the first time since it was installed 18 months ago!

    Although we've resolved various SAN issues, we've come to the conclusion that the physical configuration of the SAN is not going to provide the performance we require given the amount of VMs we have. Our plan is to put in a new SAN spec'd and configured correctly for the amount of VMs and the generated I/O.

    The issue you mentioned has definitely caused us problems in the past and as a result we've scheduled each VM to update Sophos at 15 minute intervals throughout the night.

    We've actually got Sophos disabled within the VM environment for the moment until we get the new SAN installed, which is likely to be in May.

  • RELEVANCY SCORE 2.61

    DB:2.61:Shared Network Storage - Qlikview Cluster ka



    Hi guys,

    we are planing to implement QV-Cluster in our infrastructure (Microsoft Server 2012 R2). As we've noticed:

    QlikView supports the use of a SAN (NetApp, EMC, etc.) with a QV Server provided it is mounted to a Windows Server (2003, 2008) and then shared from that Windows server.

    To reduce cost we worked out two possible solutions:

    1. Cluster Shared Volumes - Scale out Fileserver

    2. Server 2012 as a Fileserver

    Now here are my questions:
    1. Is it possible to use Cluster Shared Volumes - Scale-Out File Server as a shared network storage solution in a QV cluster environment?
    2. If yes, are there any best practices?
    3. Is it possible to use a Windows Server 2012 R2 file server as a shared network storage solution in a QV cluster environment?
    4. What kind of solution do you prefer, other than dedicated storage like EMC or NetApp?

    Can you guys help me out?

    Thanks in advance

    Kemal

    DB:2.61:Shared Network Storage - Qlikview Cluster ka


    Hi Kemal,

    To start you can talk to your account manager and they can get a Technical Architect to look at your system and decide what would be the best solution.

    However, having shared storage behind a Microsoft cluster is doable. This will help with the single point of failure. However, the time it takes the cluster to fail over can and will give you issues: if the PGO files are not available, the QVS servers will throw errors.

    Bill

  • RELEVANCY SCORE 2.60

    DB:2.60:Ciscoworks Lms 3.2 In Ha Or Dr Environment: Cluster Windows 2003 ? mc



    Hello everybody,

    I'm interested in any experience of CiscoWorks LMS 3.x installation in a High Availability or Disaster Recovery environment,

    especially on Windows 2003 cluster systems.

    The company I work for doesn't plan to qualify the VERITAS product and works with Windows 2003 cluster systems and a SAN.

    The online LMS documentation mainly describes the use of the VERITAS product (Installing and Getting Started With CiscoWorks LAN Management Solution 3.2, OL-17907-01, Chapter 4, "Setting Up CiscoWorks LMS in High Availability and Disaster Recovery Environment")

    and says little about cluster servers.

    Can somebody provide information about such a cluster installation, without VERITAS?

    Thanks and regards,

    André

    DB:2.60:Ciscoworks Lms 3.2 In Ha Or Dr Environment: Cluster Windows 2003 ? mc


    Windows Cluster is not supported by LMS. LMS only supports Veritas and VMware/VMotion for HA. Veritas offers geographic redundancy, whereas VMware is more for hot-standby HA.

  • RELEVANCY SCORE 2.60

    DB:2.60:How To Enable Clustering After Resource Pool Has Been Created 9k


    I have an environment on Oracle VM 3.2.8 with VMs running off the local storage of a single server, oraclevm1. Clustering wasn't enabled at the time the "Server Pool" oraclepool1 was created. I need to enable clustering on oraclepool1 on oraclevm1, and add a second server, oraclevm2, plus shared SAN LUNs. Once that is done, I need to move the VMs from local storage to a shared SAN LUN on the server pool. I was able to add the second server to the first server pool.
    - Tried to add a SAN LUN as a shared repository, got the errors below: OVMRU_002030E Cannot create OCFS2 file system with local file server. Local FS: its server is not in a cluster. OVMRU_002036E - Cannot present the Repository to server: . The server needs to be in a cluster.
    - Tried to move a VM to local storage on the second server, got the error below: Unable to find an owned and running server to perform copy_file operations on bootdisk.
    - Un-presented the first server from the pool; got an error and it aborted.
    - Tried other ways to move the VMs to LUNs on another pool, got the error below: Virtual machine uses an OCFS2 repository and therefore cannot be moved between pools.
    - Created another server pool, oraclepool2, with "Cluster Enabled" and a shared SAN LUN, and added oraclevm2 to oraclepool2. Tried to manually copy the IMG files from oraclevm1/oraclepool1 to oraclevm2/oraclepool2, where the shared SAN LUN is configured: shut down the particular VM on oraclepool1, used scp to copy the img files to the shared SAN LUN on oraclevm2/oraclepool2. The copy takes forever and fails in the middle, and scp copies all img files thick, NOT thin as they were originally provisioned; I had a 300GB thin drive with 4 GB used, and scp reports a copy of the full 300GB img file.
    It seems the trick is to be able to enable clustering on the first pool while it has a local repository with VMs running off it.
    Any feedback will be appreciated.

    DB:2.60:How To Enable Clustering After Resource Pool Has Been Created 9k

    I had the same problem... I used WinSCP to copy my virtual machines to my local disk; after that I just removed them from my servers, moved the servers to Unassigned Servers, deleted the current pool and created a new pool with HA specifications.

  • RELEVANCY SCORE 2.60

    DB:2.60:64bit Windows2003 Vm's Constantly Blue Screening 73


    I'm having an issue with 64-bit VMs within my VMware production environment.

    I currently have a 4 x ESX 3.5 host cluster with a shared iSCSI SAN.

    Until recently I was able to create new 64-bit VMs without any issues.

    Now I cannot create any 64-bit VMs; when I deploy a new template or perform a full clean build of Windows 2003 x64, the server blue screens and reboots constantly.

    Now one of my live servers, which is 64-bit, has also started blue screening.

    I have no problems creating 32-bit OSes.

    Has anyone seen this problem before?

  • RELEVANCY SCORE 2.59

    DB:2.59:Windows 2008 Cluster With San And Das aj


    Is it possible to have both SAN and DAS in a cluster environment? Will they work fine even if they use different software for running multi-pathing? Is this configuration advised?

    Thanks

  • RELEVANCY SCORE 2.59

    DB:2.59:Test Esx Cluster Without A San mz



    Greetings,

    I am wondering if it is possible to set up a test ESX cluster environment without a SAN. In my future test environment I will have three servers. Would it be possible to have one server act as the file server, while the other two servers access the shared volume? Almost like an NFS environment.

    Thank you in advance,

    -Tim

    DB:2.59:Test Esx Cluster Without A San mz


    Not like an nfs server, but an nfs or iscsi server. That's what is required to test shared storage/vmotion without a FC san.

    You can use all 3 servers for ESX and then host the virtualcenter and storage server as vms.

    You can do this with either NFS or iSCSI protocols, depending on which you want to test and are more comfortable setting up. There are a variety of open source iSCSI targets mentioned in the forums. Almost any OS these days is capable of hosting NFS service. Read the configuration guides and go for it.

  • RELEVANCY SCORE 2.59

    DB:2.59:Vmm Cannot Perform A San Transfer Because The Virtual Machine (Gvacms3) Is On A Cluster Shared Volume xc


    After upgrading SCVMM 2008 R2 to SCVMM 2012 I'm getting:
    VMM cannot perform a SAN transfer because the virtual machine (GVACMS3) is on a cluster shared volume (C:\ClusterStorage\Volume1\GVACMS3) on the host server (gvahyperv03.europe.loc).
    when I try to store the VM to the library server.
    Please help me achieve this task.
    Actually the VM is stopped and has no network assigned to its NIC.
    Thanks in advance

    DB:2.59:Vmm Cannot Perform A San Transfer Because The Virtual Machine (Gvacms3) Is On A Cluster Shared Volume xc


  • RELEVANCY SCORE 2.58

    DB:2.58:Migrating Sql Data From Msa1000 Disk Arrays To San In Sql 2000 Cluster. zk


     
    Hello All,
     
    I have a 2-node cluster environment that I am planning to replace with new hardware. Currently I am using MSA1000 disk arrays. Now I would like to move my storage to a SAN.

    What are the necessary prerequisites and actions to do this? Is there anything in particular I must take care of in planning?
     
    Any help would be appreciated.
     
    Thanks
    Dev

    DB:2.58:Migrating Sql Data From Msa1000 Disk Arrays To San In Sql 2000 Cluster. zk

     
    How about if you make up a plan and post it?
     
    Detach the DBs, transfer the files, attach the DBs.
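
    For illustration, a hedged sketch of the detach/transfer/attach step for one database on the SQL 2000 cluster, run with osql from the active node (virtual server name, database name and paths are placeholders):

        osql -E -S SQLCLUSTER -Q "EXEC sp_detach_db 'MyAppDB'"
        # copy MyAppDB.mdf / MyAppDB_log.ldf from the MSA1000 drive to the new SAN drive, then:
        osql -E -S SQLCLUSTER -Q "EXEC sp_attach_db 'MyAppDB', 'S:\SQLData\MyAppDB.mdf', 'S:\SQLLogs\MyAppDB_log.ldf'"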

  • RELEVANCY SCORE 2.58

    DB:2.58:Thread: Iscsi And Clustering 7a


    Can we integrate an inexpensive iSCSI solution (a NW6.5 iSCSI target server with SATA RAID5 disks) into an existing 5-node cluster with a fibre-attached SAN?

    We have a 5-node NW6.5SP2 cluster with a fibre-attached SAN (Xiotech), but data is filling up and we would like to migrate old data to low-cost disk space.

    Can we define multiple pools and volumes on the iSCSI target server, and mount these on the different cluster nodes?

    Example: Node_1 with volume DATA (SAN) and volume DATA2 (iSCSI) in the same cluster (un)load scripts, so the server can migrate old files from DATA to DATA2 transparently to the users. (If a user wants to access old files, they should see them in the same drive mapping as now, only maybe with slower access times.)

    Other nodes can have other iSCSI volumes mounted, but should be able to fail over to nodes where an iSCSI volume is already mounted, and migration back should be possible without losing other iSCSI connections. (I believe this is only possible from NW6.5SP3 or above? Please confirm.)

    In the iSCSI 1.1.3 Administration Guide of 28 Feb 2005, I read that Cluster Services should be installed AFTER installation of the iSCSI initiator and BEFORE creating NSS partitions on the shared disk system. This is probably the case in an iSCSI-only SAN, but is this also true in a mixed environment with a fibre SAN and an iSCSI SAN on the same cluster?

    Thanks for all possible information,

    Kris

    DB:2.58:Thread: Iscsi And Clustering 7a

    Hi,

    ianb@nospam.com wrote:
    Novell say the target cannot also be an initiator (i.e. you can't mount the iSCSI volumes on the target itself in a loopback).

    FWIW, it's correct that Novell says you can't (so it's probably not supported either), but technically it works just fine.

    Hi Massimo,

    Hence my very specific wording. ;-)

    Cheers

    Ian

  • RELEVANCY SCORE 2.58

    DB:2.58:Clustered Related Query c3


    Hi All,
    Happy Christmas.
    Well, we are planning to move to a cluster environment.
    From my understanding I can achieve it in 3 ways:

    1)Oracle software
    Oracle CRS (Cluster Software) -- Cluster Software
    Oracle 10gr2 with RAC -- Database Software
    ASM, OCFS (Oracle Cluster File System) -- Storage Management
    2)Sun software
    Sun cluster 3.1.8 -- Cluster Software
    Oracle 10gr2 with RAC -- Database Software
    NAS or SAN-- Storage Management
    3)Veritas Cluster
    Veritas Cluster Server 5 -- Cluster Software
    Oracle 10gr2 with RAC -- Database Software
    Veritas Storage Management for Oracle RAC. -- Storage Management

    My question is: do I need Oracle 10gR2 with RAC when I go with Veritas? In the past I worked on Veritas with Oracle 8i. Can someone throw some light on this?
    Also let me know if there are better ways to achieve the cluster environment.

    Regards,
    Kris

    DB:2.58:Clustered Related Query c3


  • RELEVANCY SCORE 2.58

    DB:2.58:Thread: Moving Cluster To New San 98


    We had Dell consultants come in to help us move our environment from our old SAN to our new one. They used SAN Copy to transfer the LUNs. Everything seemed to work out fine, except for our GroupWise cluster.

    After transferring the LUNs to the new SAN and starting up the nodes in the cluster, the nodes would scan for new devices and partitions for 50 minutes! This normally takes a minute or two. The nodes would also not join the cluster. After fighting with it for 20 hrs on Friday night and Saturday, I had the reps point the cluster back to the original LUNs on the old SAN. I then rebooted the nodes, and it still took about 25-30 minutes on the scanning for devices and partitions.

    They also appeared like they wouldn't join the cluster, but thankfully, about 25 more minutes after ldncs.ncf was run, one by one they eventually all joined again. Once they had joined, I rebooted some of the nodes and they were back to normal as far as scanning for devices and partitions, and they joined the cluster much faster.

    Now I'm second-guessing myself and wonder if the initial try would have worked if I had waited a few hours for the nodes to join after the LUNs had been transferred to the new SAN.

    Does anyone have any advice on what needs to be done when migrating LUNs for a cluster? Is it normal for the nodes to scan for devices and partitions for a long time after this change? Any help would be greatly appreciated. We're gonna give this another try in February. Thanks

    DB:2.58:Thread: Moving Cluster To New San 98

    bkharmon1 wrote:

    Follow-up.

    We're going to give this another go next weekend. One other thought has crossed my mind: when should I initiate the SAN copy for the GroupWise resources? Last time we tried this, I shut down all the GroupWise services and left the nodes in the cluster. I'm wondering if I should issue a cluster leave prior to doing the SAN copy.

    We did a similar migration from a CX300 to a CX4-120 some time ago.

    Did it on a weekend; we shut down the whole cluster and powered off the cluster nodes.

    Then we SAN-copied the GW LUNs and the split-brain LUNs.

    After the migration we brought the system up; it went smoothly.

    -sk

  • RELEVANCY SCORE 2.58

    DB:2.58:Essbase On A San 8s


    I am putting Essbase in a production environment on a SAN and want to know:

    1. Throughput expectations
    2. IOPS / latency expectations.

    My IT department just wants to make sure that Essbase will perform on the SAN and has asked me for these specs.

    Thanks in advanced for your help.

    DB:2.58:Essbase On A San 8s

    Hi,

    I have worked with one major client who has successfully run Essbase via SAN only. No hard drives at all in the Essbase servers even for OS! I never benchmarked the SAN vs. local drive performance because we didn't feel the need to do so. There were slight differences noticed which were then offset by other version upgrades and database settings I implemented to keep things running faster than ever. So there may be a difference on an apples-to-apples comparison, but I believe it is manageable. This was a huge SAN disk farm with plenty of space made available to Essbase and fiber optic connections to the SAN. The only trouble we ever had was when the SDMI fiber optic controller failed. For those saying that SAN is not a good option, I would want to know more about the infrastructure choices.

    Good luck!

    Darrell

  • RELEVANCY SCORE 2.57

    DB:2.57:Moving Exchange Search Instance To The New San Disk In A Exchange 2k3 Cluster 8s


    Moving the Exchange Search instance to the new SAN disk in an Exchange 2k3 cluster.

    DB:2.57:Moving Exchange Search Instance To The New San Disk In A Exchange 2k3 Cluster 8s

    Hi,
    I assume that this is what you are asking for?
    http://support.microsoft.com/default.aspx/kb/938445
    Leif

  • RELEVANCY SCORE 2.57

    DB:2.57:Unable To Perform San Transfer From High Availability Library To Csv Using Microsoft Iscsi Target zk


    I am using SCVMM 2008 R2 with TFS 2010 to set up lab management. I am using Microsoft iSCSI target 3.3.16563 on Server 2008 R2 to provide a cluster shared volume to my failover cluster. I also used the same target to create a highly available
    file server to use as the SCVMM library using the same failover cluster.
    When I try to store a virtual machine in the library from the cluster, or deploy from the library to the cluster, it will only transfer via BITS.
    Is it possible to configure a high availability file server SCVMM library to perform SAN transfer when deploying or storing virtual machines to cluster shared volumes? I have tried this using separate iSCSI targets or having all volumes in one target.

    DB:2.57:Unable To Perform San Transfer From High Availability Library To Csv Using Microsoft Iscsi Target zk

    To answer my own question, Lab Management in TFS 2010 does not support clustered hosts, so the question of SAN transfer is moot.

  • RELEVANCY SCORE 2.56

    DB:2.56:2 Nodes Cluster - Nas Configuration 1k



    My company is looking at deploying a 2-node cluster environment. However, we need to better understand the Qlik cluster solution.

    To my understanding, for any failover the 2 QVS instances will need to access the same root folder for all shared objects. The Qlik cluster document suggests a NAS approach, with a separate third server allowing shared access to the SAN storage.

    Any suggestions on the configuration of this third server (CPU, RAM, physical or not, ...)?

    For reference, both QVS instances will reside on physical servers with about 20 processor cores and 256 GB of RAM.

    Thanks in advance

    DB:2.56:2 Nodes Cluster - Nas Configuration 1k


    The Access Point will be deployed to our virtual server environment. I was finally able to get some feedback from Qlik technical resources and we now have all the configuration information required.

    We will be starting on the fun part, development, now...

    Thanks for all replies

  • RELEVANCY SCORE 2.56

    DB:2.56:Hyperv Cluster Maintenance And Vm Shutdowns xc


    Hello all, I have a multi-node 2008 R2 cluster with all VMs running on an iSCSI SAN. I need to shut down all cluster nodes and VMs to perform maintenance on the SAN.
    What is the best way to set up the VMs so they stay on their owning cluster node when the nodes are started back up? I've tried a few suggestions I've found in this forum, but every time I shut down a VM and then shut down the owning cluster node, it moves to another
    cluster node.
    Thanks for your help

    DB:2.56:Hyperv Cluster Maintenance And Vm Shutdowns xc

    Hi,
    The following post discussed a similar issue, you can refer to:
    Four Node Hyper-V Cluster - Is this the right way to shut all nodes down
    http://social.technet.microsoft.com/Forums/en-US/winserverClustering/thread/f467c70d-7d7e-4e72-b3aa-3648c9ee4853
    Vincent Hu
    TechNet Community Support
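
    For what it's worth, a hedged PowerShell sketch of one way to handle this on 2008 R2 (the export path is a placeholder): record each group's owner before the shutdown, then move the groups back once the nodes are up again.

        Import-Module FailoverClusters
        # Before maintenance: save the current owner of every cluster group.
        Get-ClusterGroup | Select-Object Name, OwnerNode |
            Export-Csv C:\Temp\owners.csv -NoTypeInformation
        # ... shut down the VMs and nodes, do the SAN maintenance, start everything back up ...
        # After maintenance: move each group back to its recorded owner.
        Import-Csv C:\Temp\owners.csv | ForEach-Object {
            Move-ClusterGroup -Name $_.Name -Node $_.OwnerNode
        }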

  • RELEVANCY SCORE 2.56

    DB:2.56:Accessing Windows 2008 File Service Cluster With Windows 2012r2 File Services Cluster Using Same San j9


    I am in the process of converting my existing Windows 2008 host servers (a 2-host cluster) to Windows 2012 R2 (a 2-host cluster).
    All 4 servers will be accessing the same SAN via iSCSI.
    Currently the Windows 2008 host servers are running both File Server roles and Hyper-V in a clustered environment.
    How can I transfer the File Server roles from 2008 to 2012 R2 while still keeping access to the same SAN and LUNs?

    Thanks
    --Steven B.

    DB:2.56:Accessing Windows 2008 File Service Cluster With Windows 2012r2 File Services Cluster Using Same San j9

    Thanks Tim for the prompt response. I'll keep you posted once I schedule the cutover. Steven Bond, HBR Solutions Inc.
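
    For the storage side of the cutover, a hedged sketch of connecting the new 2012 R2 nodes to the same iSCSI targets before migrating the clustered File Server role (the portal address is a placeholder; the role itself would be moved separately, e.g. with the Copy Cluster Roles wizard):

        # Run on each new 2012 R2 node: start the initiator, register the SAN's portal, log in.
        Start-Service MSiSCSI
        Set-Service MSiSCSI -StartupType Automatic
        New-IscsiTargetPortal -TargetPortalAddress 10.0.0.10
        Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
        # The LUNs then show up as offline disks; do not bring them online on the new nodes
        # while the old cluster still owns them.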

  • RELEVANCY SCORE 2.56

    DB:2.56:Sql 2012 Clustering Question... 9k


    I'm looking for help in understanding my options for clustering SQL Server 2012. I'm looking to deploy a two-node SQL cluster for a SharePoint environment. (This server will also host other databases outside the SP environment.) I have
    shared storage (a Fibre Channel connected SAN) and was thinking about using a Windows failover cluster with the shared storage. Our SharePoint consultant is requesting SQL 2012 AlwaysOn (synchronous commit). Which would be the better option for the failover
    cluster setup?
    These are physical servers and not VMs.

    Thank you for any input you can offer.

    DB:2.56:Sql 2012 Clustering Question... 9k

    I agree on the sharing of SQL instances. I never like to share the SQL server when it comes to SharePoint deployments. It always comes down to what the customer wants, unfortunately. Thanks again for taking the time to reply, Edwin.
    This is what makes the forums so beneficial.

  • RELEVANCY SCORE 2.56

    DB:2.56:Sql Failover Cluster 2008 R2: Cluster Disk Not Available Inside Sql Manamagement p8


    Hi,
    In our SQL environment the SAN disk for log files (L:) isn't available inside SQL Server Management Studio, so we can't move our log files to this location. Drive L: is configured as a cluster disk of the SQL Server and is, of course, available in Disk Management.
    We also have a lab environment, configured identically to the production one, and there everything is fine.
    I suppose there's an option we have missed to configure, or something similarly trivial, but I have no idea.
    Has anyone a hint or idea what i can check or do to solve the problem?

    Our Environment:
    MS SQL Failover Cluster 2008 R2,
    2 Nodes,
    4x Shared Storage LUNs (Netapp SAN)

    In the locate database files menu we have only the option to choose drive S:.

    Kind regards!

    DB:2.56:Sql Failover Cluster 2008 R2: Cluster Disk Not Available Inside Sql Manamagement p8

    Hi Sean,
    It's like debugging the silent soundcard and its driver, while the Windows volume is off.
    Thanks Sean, that was exactly the problem.
    Kind regards,
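
    The actual fix isn't quoted above, but a frequent cause of this symptom is that the log drive was never added as a dependency of the SQL Server cluster resource, and a clustered instance only offers disks it depends on. Purely as a hedged illustration (resource names are placeholders, assuming the 2008 R2 FailoverClusters module):

        Import-Module FailoverClusters
        # Take SQL offline briefly, add the disk as a dependency, bring it back online.
        Stop-ClusterResource "SQL Server (MSSQLSERVER)"
        Add-ClusterResourceDependency -Resource "SQL Server (MSSQLSERVER)" -Provider "Disk L:"
        Start-ClusterResource "SQL Server (MSSQLSERVER)"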

  • RELEVANCY SCORE 2.56

    DB:2.56:Cluster In A Box And Boot From San - What's The Problem? 7j



    Hi,

    I'm using my ESX hosts for a development and testing environment.

    Therefore, clusters are only used for similarity to the production servers, and I can use cluster-in-a-box.

    My ESX hosts boot from SAN, since we can't back up through the LAN and SAN backups are more convenient.

    The documentation states that clustering on an ESX host booted from SAN is not supported - even though it seems to work just fine.

    Has anyone had any issues with that?

    Thank you,

    Shahar.

    DB:2.56:Cluster In A Box And Boot From San - What's The Problem? 7j


    mrbones,

    VMs running MSCS are not VMotion compatible, one of the reasons being that they require their system disks (C:) on local VMFS datastores. With all-SAN setups, you can get timing problems in case of SAN hiccups/failures.

  • RELEVANCY SCORE 2.56

    DB:2.56:Thread: Migrate Cluster From Old San To New San a1


    I'm in charge of migrating our 3-node NW6.5SP8 cluster from the old SAN to the new SAN.

    We have critical shared resources: NSS file systems, GroupWise, ZENworks, iPrint...

    So far I have migrated our FTP resource by creating a new LUN on the new SAN and mapping it to the 3 servers.

    But how do I create the mirrored partition for the SBD?

    With NSSMU? Is it disruptive? Perhaps the cluster has to be down first?

    Thanks in advance.

    DB:2.56:Thread: Migrate Cluster From Old San To New San a1

    Thank you for your help.

    This SAN migration is a success !

    NSS tools are great !

    Thank you for your help.

    EP

  • RELEVANCY SCORE 2.56

    DB:2.56:Failover Cluster Storage Question 78


    Hello everyone. Is it possible to configure a failover cluster environment without a SAN or SCSI storage? I have 2 identical servers with Win2008 Enterprise and Hyper-V. I would like to use their local storage only and create some redundancy or mirroring. Any suggestions on how to do that? Thanks in advance.

    DB:2.56:Failover Cluster Storage Question 78

    Sorry for the delay, I was on vacation. Hopefully by now you got in touch with pre-sales support and got your question answered. A majority node set with a file share witness is the best way to go in your two-node configuration. I'll be glad to help you set things up tomorrow if you still need help.
    David A. Bermingham
    Director of Product Management
    http://www.steeleye.com
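
    For reference, a hedged sketch of switching the quorum to node and file share majority, assuming the FailoverClusters PowerShell module on 2008 R2 (on 2008 RTM the same change is made in Failover Cluster Manager); the share path is a placeholder:

        Import-Module FailoverClusters
        # The witness share should live on a third machine that is not a cluster node.
        Set-ClusterQuorum -NodeAndFileShareMajority "\\witness-server\ClusterWitness"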

  • RELEVANCY SCORE 2.56

    DB:2.56:Stand Alone Server Into Cluster Server...Recomandations And Suggestion 9c


    Hi all,
    I have a standalone server (SQL Server 2000) on Win2k3 (SP2).
    Now I want to make it a clustered server.
    The ways I am thinking of:
    1. Create a fresh cluster environment and move the databases.
    2. Arrange a SAN drive, move all databases (data files/log files) to the SAN drive, then add one more node and create the cluster.

    I am really new to clustering; could somebody give me direction on moving from standalone to cluster?

    Thanks in advance
    SNIVAS

  • RELEVANCY SCORE 2.55

    DB:2.55:Adding A Vm Cluster Node (Iscsi) To Physical Cluster Node (Scsi) - I Have Now Got Iscsi Connection From This To San (Extra Nic) df


    Adding a virtual cluster node to a physical cluster whose shared storage is SCSI (DELL PV220s Disk Array).

    I would like to virtualise this cluster; so far my reading says to add a new (third) node in the virtual environment and then eventually evict the physical nodes.

    Could someone guide me on how to set up my shared storage on the SAN side?

    So far I have been able to create an iSCSI connection to my Dell EqualLogic SAN and map a volume (LUN) created on the SAN to one of my cluster nodes.

    Now could you please give me the steps I need to take to add a third node so that it can become part of my physical cluster? The aim is to get rid of the physical cluster in the end.

    Thanks a lot

    DB:2.55:Adding A Vm Cluster Node (Iscsi) To Physical Cluster Node (Scsi) - I Have Now Got Iscsi Connection From This To San (Extra Nic) df


    Hello,

    Moved to Virtual Machine and Guest OS forum.

    Adding a virtual cluster node to a physical cluster whose shared storage is SCSI (DELL PV220s Disk Array).

    You will first need to migrate the data from the PV220s to some form of shared storage that ESX recognizes as shared storage. PV220s are not considered shared storage for ESX.

    I would like to virtualise this cluster, so far my reading says add a new (third node) in the virtual environment and then evict the physical nodes eventually.

    If your data is on some form of remote storage, then yes this is possible.

    Could someone guide me how will I set up my shared storage on SAN side.

    So far I have been able to create an ISCSI connection to my dell equallogic SAN and map a volume (LUN) created on SAN to one of my cluster nodes.

    Check out http://www.vmware.com/pdf/vi3_35/esx_3/vi3_35_25_u1_mscs.pdf for assistance on this.

    Best regards,
    Edward L. Haletky
    VMware Communities User Moderator
    Author of the book 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
    Blue Gears and SearchVMware Pro Blogs -- Top Virtualization Security Links -- Virtualization Security Round Table Podcast

  • RELEVANCY SCORE 2.55

    DB:2.55:Changing Our Virtual Environment Architecture For Dr: Any Advice? d8


    Currently we have four nodes in the cluster: two nodes for the application (SharePoint 2007, running two Hyper-V guests on the host) and two nodes for the database server (SQL Server 2008) (note: the DB servers are NOT VMs; they read/write from a SAN).
    (I inherited this environment.) Backups are currently being done on site. There is no failover environment offsite. I want to change this.
    I want to create a failover site in another geographic area. There is some concern from management about the performance of the machines. I set up data collector sets and the machines perform fine and meet the needs of the users we have. I think the concern is really that SQL is running on a machine 4 times as powerful as the SharePoint server.
    Here's what I think should be done:
    First eliminate the SAN, virtualize everything not just the application server, split the nodes up and replicate to another location for business continuity.
    Site 1: West Coast
    SQL server on smaller, less powerful node 1, virtualized (Hyper-V)
    Application server on more powerful node 2, virtualized with four web front ends (Hyper-V)
    Site 2: East Coast
    SQL server on smaller, less powerful node 3, virtualized (Hyper-V)
    Application server on more powerful node 4, virtualized with four web front ends (Hyper-V)

    Thoughts? Warnings? Cautions? Pitfalls?

    DB:2.55:Changing Our Virtual Environment Architecture For Dr: Any Advice? d8

    Hello,
    Did you have further questions?
    Nathan Lasnoskihttp://blog.concurrency.com/author/nlasnoski/

  • RELEVANCY SCORE 2.55

    DB:2.55:Dpm 2010 Backup Using Hardware Vss Provider? k3


    How can I check to see if DPM is using the SAN VSS hardware provider rather than the default software VSS Provider? Is there an event logged anywhere or some sort of test I can perform?
    Does the DPM Server need to be connected to the SAN in order to use the SAN Hardware provider?

    eg. I have a DPM Server that will protect a Hyper-V cluster. I want to improve performance of backups and use the SAN hardware provider (that I have installed). Does the DPM server also need to be connected to the SAN (in this case via the ISCSI initiator).

    Microsoft Partner

    DB:2.55:Dpm 2010 Backup Using Hardware Vss Provider? k3

    The best method would be to use your SAN management tool/interface to monitor during the time a backup is taking place. You should see the hardware snapshots being generated. Other ways would be to look for events in the system/application event logs
    logged by the hardware provider itself, as *most* log the successful completion of a backup.
    Cheers, Tyler F [MSFT] - This posting is provided AS IS with no warranties, and confers no rights.
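    As a quick check from the DPM server or a cluster node, you can list the registered VSS providers and look for recent VSS-related events around the backup window. A minimal sketch; vssadmin is a standard Windows tool and the event filter is only an assumption about where your provider logs:

        # List all registered VSS providers; a SAN hardware provider should appear
        # here alongside the default "Microsoft Software Shadow Copy provider".
        vssadmin list providers

        # Recent VSS events in the Application log around the backup window.
        Get-WinEvent -FilterHashtable @{ LogName = 'Application'; ProviderName = 'VSS' } -MaxEvents 50 |
            Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize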

  • RELEVANCY SCORE 2.55

    DB:2.55:Rac Doubts x7


    Hi All,

    I am very new to RAC and trying to learn it, and I have some queries.

    I think a raw device is a chunk of hard disk or SAN space that is not formatted and has no file system on it.

    How can a SAN raw device be used in a 2-node RAC cluster when a raw device can only be presented to one node? One of my friends says that in his environment they have a 2-node RAC cluster with SAN raw devices and ASM. As far as I know, when we create a raw device we allot it to a node by name or IP, so how can the same device be given to another node? My friend is not technically strong, so I am asking here.

    Regards

  • RELEVANCY SCORE 2.54

    DB:2.54:Microsoft Cluster - Add 3rd Node (Vm) San-Iscsi To 2 Physical Nodes Das (Scsi) zd


    Hi,

    Please guide me step by step on how to add a 3rd cluster node in the VM environment with an iSCSI SAN to the existing 2-node cluster, which is in the physical environment and attached to DAS (SCSI).

    I would be really thankful.

    Regards

    Rucky

    DB:2.54:Microsoft Cluster - Add 3rd Node (Vm) San-Iscsi To 2 Physical Nodes Das (Scsi) zd

    Hi tex,

    What do you mean by a shared disk cluster?

    We have a DELL PV220s (disk array) -

    the quorum disk (Q: drive, 1 GB) is on the array,

    and

    the R: drive (a 1 TB volume) is on that disk array as well.

    I have got the connection going from the cluster/DAS/PV220s (one of the nodes) to the SAN/iSCSI by adding an additional network card in the cluster node, creating a LUN on the SAN and then rescanning the disks on the cluster node. I think I can now use Robocopy to copy my files.

    One question which comes to my mind: suppose I were to use RDMs for my cluster, or even for a stand-alone file server connecting to these RDMs.

    When I back up the VM or the cluster, will the VMDK file contain all the files, so the backup would be about 1 TB (my data)? Or is there a way I can back up only the OS/cluster/file server and not the RDM volume/LUN,

    and use some other technology to back that up?

    The only reason I ask is that if the machine goes down and it is just the machine we need to restore, it would be really quick; but if it had 1 TB of data attached to it, it would take ages, wouldn't it? Even though it's combined in one VMDK file?

    Please guide me in the right direction, I would be really grateful to you.

    Regards

    Rucky
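    Since Robocopy comes up above for copying the clustered data onto the new SAN LUN, here is a minimal sketch of a permissions-preserving copy. The source R: matches the thread, but the target drive letter and log path are placeholders, and a final pass would normally be run after taking the clustered resources offline:

        # Mirror R: (old DAS volume) to S: (new SAN LUN), copying data, NTFS
        # security, owner and auditing info, and retrying briefly on locked files.
        robocopy R:\ S:\ /MIR /COPYALL /ZB /R:1 /W:5 /LOG:C:\Temp\san-migration.log

        # /MIR     mirror the directory tree (includes deletions on the target)
        # /COPYALL copy data, attributes, timestamps, security, owner, auditing
        # /ZB      fall back to backup mode if access is denied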

  • RELEVANCY SCORE 2.54

    DB:2.54:Will A 5-Millisecond Interruption On Our San Corrupt Sql Server? 7s



    We are running various versions of SQL Server Standard (2000, 2005, 2008, 2008 R2) on both 32-bit and 64-bit servers, on Windows 2003 Standard and Windows 2008 Standard, in a VMWare environment
    against an Equallogic SAN. We are about to perform some maintenance on the SAN which may result in interruptions of up to 5 milliseconds. The admins assumed SQL Servers would be OK with an extremely short interruption. Should I be concerned? Does SQL Server
    have a tolerance for I/O interruptions, and if so, how long is it?

    DB:2.54:Will A 5-Millisecond Interruption On Our San Corrupt Sql Server? 7s

    Hi Nicole,
    I agree with Josh, however:
    It depends what you mean by interrupted. Unlike, for example, a file server, SQL Server doesn't tolerate any interruption errors, only queueing and delays. The moment somewhere in the stack a failure message is returned, you're in trouble.
    For example, our SQL servers tolerate a 50-second SAN failover from datacenter 1 to datacenter 2, but the servers will fail instantly when they try to write a single IO and get a failure message back from the storage drivers.
    We had this problem ourselves for over a month because of an MPIO misconfiguration. During every datacenter failover a couple of the SQL servers would fail at random. They lost connection to one of the system database files only momentarily, but that was all that was needed. Expect the server to behave strangely: SCOM will not notice a problem, read queries will work fine (as long as the data is still in the buffer cache), but write queries will fail.
    Contact your SAN vendor to make sure that the maintenance involved just pauses the IO traffic and doesn't force an error message back up the stack somewhere.

    regards,
    Edward
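    If you want to see how the instances are coping with storage latency before and after the SAN work, here is a minimal sketch using the SQL Server PowerShell snap-in (Invoke-Sqlcmd ships with the SQL 2008+ client tools; older instances would need sqlcmd/osql) and the standard I/O DMVs. The instance name is a placeholder:

        # Shows per-file I/O stall totals and any I/O requests currently pending
        # against the storage, which is where a SAN pause would surface first.
        $query = "SELECT DB_NAME(database_id) AS db, file_id, io_stall_read_ms, io_stall_write_ms " +
                 "FROM sys.dm_io_virtual_file_stats(NULL, NULL); " +
                 "SELECT * FROM sys.dm_io_pending_io_requests;"
        Invoke-Sqlcmd -ServerInstance "SQLNODE1" -Query $query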

  • RELEVANCY SCORE 2.54

    DB:2.54:Fabric Metrocluster In San Configuration dp



    Hello All,

    We are running several MetroClusters in NFS and CIFS environments in our workplace but haven't had a chance to configure one for SAN. I have a couple of questions regarding an Active/Active MetroCluster in SAN:

    1. How can a host access a LUN on the primary site? Do we have to buy separate FC switches, other than the MetroCluster switches, for the SAN data network?

    2. How can a host access a LUN in the event of a disaster? Suppose the motherboard of a NetApp FAS6280 (the primary site system) fails at some point. In that case, how can a host access the LUN (single image) at the secondary site?

    Regards,

    Farhan Ghazanfar

    DB:2.54:Fabric Metrocluster In San Configuration dp

    "Do we have to have a fc path to the remote site" ?
    Yes, that's correct - you have to stretch both your SAN and LAN across the two sites to make the magic work.

  • RELEVANCY SCORE 2.53

    DB:2.53:Whats An Entry Level San For Esx? 7f



    Hi, there...

    I want to ask a general question - one that is more a matter of opinion and experience than a technical config problem...

    Where I work we use an MSA1000 SAN. I am aware this is an "entry level" SAN targeted at small/medium businesses. The SAN configuration is fully redundant - with multipathing in place. The SAN has two SPs (Active/Passive) and 2Gb throughput via the switches and QLogic cards (fibre channel)...

    So here's my question... Is this too "low-level" for ESX - if I am using all the features (VMotion, cluster-across-boxes, and cluster between physical servers and VMs)?

    What would you regard as an entry level SAN for ESX (as opposed to the standard use of a SAN)?

    Do you buy a SAN purely for use by ESX/VMotion, or do you reuse existing SAN infrastructure or scale it up to allow for the introduction of ESX?

    I'm guessing that a lot depends on something which will vary from one company to another - the amount of disk activity the VMs create. So I guess this is actually a difficult/impossible question to answer without further analysis of the environment...

    Nonetheless - what's the opinion?

    Any opinions happily received...

    Mike

    DB:2.53:Whats An Entry Level San For Esx? 7f


    The answer to your question is "no" and "yes"...

    The MSA1000 is a great entry-level SAN and can comfortably support an entry-level ESX environment. Don't expect to connect more than two ESX hosts to the MSA and get great performance.

    If you are looking to support an "enterprise" environment, then you'll need to look at "enterprise-level" components...

  • RELEVANCY SCORE 2.53

    DB:2.53:Esx 3.5 On Old San To Esxi 5.1 On New San k9



    Hi all,

    Our main office environment is ESXi 5.1 / vCenter 5.1 / shared storage (fiber channel). And we have a remote office that's running 2 hosts on ESX 3.5 with their own shared storage (fiber channel), managed by a separate vCenter Server 4.0.

    We purchased new servers and a new storage array to replace the hardware at the remote office. We want to add the new servers to the existing vCenter 5.1 farm (separate cluster since they're in a different city) with the VMs moved over to the new SAN.

    Here is my plan for accomplishing this --

    1. Install ESXi 5.1 on the new servers.

    2. Add the new 5.1 hosts to vCenter 5.1, in a separate cluster.

    3. Mount the existing SAN (where the VMs reside) to the new hosts.

    4. Mount the new SAN to the new hosts only.

    5. In vCenter 4.0, power off the VMs, unregister from inventory.

    6. In vCenter 5.1, register the VMs into inventory, then power on.

    7. Perform storage vMotion of the VMs, to move their VMDKs from the old SAN to the new SAN.

    8. Shut down old hosts and old SAN.

    Will this work as I expect? Is it safe to mount the same datastore to ESX 3.5 and ESXi 5.1 hosts at the same time? Any feedback would be greatly appreciated. Thanks in advance.

    DB:2.53:Esx 3.5 On Old San To Esxi 5.1 On New San k9


    Please backup all your VM's to an offsite device (large drive ect..) JUST IN CASE.

    Overall the plan seems sound.
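    For step 7, if you would rather script the storage moves than click through them, a minimal PowerCLI sketch might look like this; the vCenter address, cluster name and datastore name are placeholders:

        # Move every VM in the remote-office cluster onto the new datastore
        # (storage vMotion; the VMs stay powered on).
        Connect-VIServer -Server "vcenter.example.local"
        $newDs = Get-Datastore -Name "RemoteOffice-NewSAN"
        Get-Cluster -Name "RemoteOffice" | Get-VM | ForEach-Object {
            Move-VM -VM $_ -Datastore $newDs -Confirm:$false
        }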

  • RELEVANCY SCORE 2.53

    DB:2.53:2 Node Cluster With Quorum Availability k9


    Hi
    I have a quick question that could do with clarifying if you all wouldn't mind:
    I have a 2-node Hyper-V cluster with multipathing, a dedicated LAN for host management and a dedicated LAN for cluster communications.
    In the event that the SAN goes offline (taking the VHDs and the quorum with it) and the management network goes offline at the same time, should the cluster remain online? In theory the two nodes should still be able to communicate via the cluster network, so I would have thought that the cluster would stay online. In my lab environment, the cluster goes offline.
    Thoughts?

    DB:2.53:2 Node Cluster With Quorum Availability k9

    rkr31,

    Was the management NIC the one for the node controlling the core resources? It probably was, if that node is the highest priority. Right-click Properties on the cluster name or IP address resource and check the tabs to see the different nodes and their priority.

    See my thread a little way up titled "Pull cable for NIC for Core Resources..." for a very similar situation. I wouldn't think the quorum model makes one bit of difference as long as you have another cluster network, which is what is required; you have to have 2 as per Microsoft. I have 3 networks and mine did the same thing as yours, even though I have 3 nodes (no witness disk) and run Node Majority. Unfortunately I don't have a lab and run in production, so to test this I need to live migrate all VMs off the node and then pull the cable as a test.
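    To see which networks the cluster actually considers usable for heartbeats and which node owns the core resources, a quick check with the FailoverClusters module (the cluster name is a placeholder):

        Import-Module FailoverClusters

        # Node state and which node currently owns the core cluster group.
        Get-ClusterNode -Cluster "HVLAB"
        Get-ClusterGroup -Cluster "HVLAB" -Name "Cluster Group"

        # Role 3 = cluster and client, 1 = cluster only, 0 = none;
        # make sure the dedicated cluster LAN is allowed for cluster use.
        Get-ClusterNetwork -Cluster "HVLAB" | Format-Table Name, Role, State -AutoSize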

  • RELEVANCY SCORE 2.53

    DB:2.53:Cluster Spread Over Multiple Sites (San) 1z


    I was wondering if it is possible to have a single HA virtual environment, spread across two sites?

    We are looking to install two ESX servers in one building and two in another. We already have fiber connections between the two sites. We would like to have a single virtual environment with VMotion etc, I am just not sure if SAN technology is available that will provide a single storage volume across two sites?

    The last SAN I worked with had a 7min replication cycle between sites so a single cluster was not an option.

    Any advice would be very much appreciated.

    MRX.

    DB:2.53:Cluster Spread Over Multiple Sites (San) 1z


    Do you think on that basis the cluster would not perform very well?

  • RELEVANCY SCORE 2.53

    DB:2.53:Using Multiple San Solutions z1


    Hi everyone,

    I am reviewing our vmware environment and I have a question regarding multiple SAN implementations.

    Currently we have diverse clusters and I am looking at combining these where possible.

    In our environment we have two different SAN solutions, and this could become more in the future.

    We will be looking into storage virtualization, but that is another story.

    In the meantime I would like to know if it is OK to connect a host to more than one storage array?

    Actually, all hosts in the cluster would probably have this configuration, for obvious reasons.

    The reasons for wanting to do this are obviously that we already have the arrays, and also so I can mix workloads on the servers.

    My fast or critical VMDKs could be stored on the enterprise SAN, while the entry level SAN can be used for less

    critical VMs.

    cheers

    DB:2.53:Using Multiple San Solutions z1

    We have an FC SAN storage array connected dual-path to 2 FC switches and 2 ESX 3.5 Enterprise servers, each with 2 HBAs. We would like to expand our infrastructure by adding a second FC SAN storage array. FC switch ports are still available for the new storage.

    Do you see any anomaly or mistake with this?

    Can the two hosts see both SANs and then proceed with migrating the guest machines' disks?

    Thank you,

    Giacomo

  • RELEVANCY SCORE 2.53

    DB:2.53:Npiv Vs Windwos 2008 R2 Failover Cluster 1d


    Dears,

    We have implemented a Windows 2008 R2 Hyper-V solution with SCVMM R2: six Hyper-V servers clustered using Windows 2008 R2 failover clustering, managed by the SCVMM R2 server. We have enabled NPIV on our switches and would like to use its features.

    What can I get from NPIV given that I already have a Windows 2008 R2 failover cluster? Can I perform SAN migration (using SAN transfer rather than cluster or network transfer) in our case? Can I perform backup over the SAN rather than over the network? Everything I read covers NPIV with a standalone Hyper-V server (not in a cluster) - does this mean it is a type of high availability? Which is better, NPIV or Windows 2008 R2 failover clustering?

    DB:2.53:Npiv Vs Windwos 2008 R2 Failover Cluster 1d

    Can NPIV be used to give VM guest machines direct access to a LUN? Does the VM see a virtual HBA?
    We are trying to implement Citrix Provisioning Server high availability. The two Citrix servers are VMs within a Hyper-V R2 cluster with VMM R2. The Citrix documentation says that both Citrix servers must have access to the same storage device. NPIV came to mind. So did iSCSI. Any thoughts?

  • RELEVANCY SCORE 2.53

    DB:2.53:No Disks Were Found On Which To Perform Cluster Validation Tests zx


    I am trying to make a cluster of two Hyper-V machines running Windows Server 2012.
    There is a SAN available on which I have created several partitions to use, such as the J, K and L drives.
    All these drives are available as network shares and are mapped on both machines.
    The primary hard disk of both machines is the C drive.
    When I run the Cluster Validation Wizard it throws warnings for several file/disk related operations, such as:
    Validate Disk Access Latency
    No disks were found on which to perform cluster validation tests

    Validate Disk Arbitration
    No disks were found on which to perform cluster validation tests
    How do I fix this warning?
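    For what it's worth, the storage validation tests only look at shared block devices (iSCSI or Fibre Channel LUNs presented to both nodes), not mapped network drives, so this warning is expected with the setup described. Once LUNs are presented to both hosts, the storage tests can be re-run on their own; a minimal sketch with placeholder node names:

        # Re-run only the storage portion of cluster validation against both nodes.
        Import-Module FailoverClusters
        Test-Cluster -Node "HV-NODE1", "HV-NODE2" -Include "Storage"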

  • RELEVANCY SCORE 2.52

    DB:2.52:1 Cluster, 2 Hosts, 1 San, 1 Vm File Server a9


    Okay, how do I make this work? I have 1 file server that I migrated from physical to virtual. It's currently attached via fibre channel to a SAN where all datastores are contained. My goal is to provide a clustered environment for this and other VMs to utilize the HA and/or vMotion technologies built into vCenter.

    I know I can set up two hosts, connect them both to the SAN and then establish a cluster, but is there a way to do this without having to double my storage requirements? e.g. Cluster 1 contains Host A and Host B set up in HA mode; all virtual machines are stored on LUN 1 of the SAN; both Hosts A and B access the same VM files on LUN 1 if either host fails.

    DB:2.52:1 Cluster, 2 Hosts, 1 San, 1 Vm File Server a9


    Thanks Rich. That helps a bunch

    Michael McCully

    IT Specialist, Systems Administrator

    DOC/NOAA/NMFS

    Northwest Fisheries Science Center

    206.860.6796

  • RELEVANCY SCORE 2.52

    DB:2.52:Thread: Question About Memory Requirements Using A San as


    I have been tasked with upgrading our current single NetWare 6.5 SP3 server to a 2-server cluster in order to increase stability. I understand I can use either a fibre-attached SAN (like the MSA1000) or an iSCSI server.

    I am new to the whole SAN environment.

    On a typical NetWare server, the volumes are mounted on directly attached drives, so the server itself requires sufficient memory to handle the disk caching based on the volume size. But in a SAN solution, which server requires the memory? Does the server in the cluster require the disk-caching memory, or does the SAN require it?

    David Wilkerson

    Network Administrator

    TSM Corporation

    David.Wilkerson@TSMCorporation.com

    (901)373-0300 x 130

    DB:2.52:Thread: Question About Memory Requirements Using A San as

    On which SAN to choose: look at the number of IO operations/sec you will need, thinking about the future.

    Just talking HP gear: shared SCSI is the lower end, and the fibre-attached HP MSA is the next step up. If you outgrow that, you will need to look at an HP EVA system. Each step up is more money and quite a bit faster.

    Most of my clients use the MSA series, with the 1500 as the new one on the block. A very nice box that will allow SCSI and SATA attached arrays at the same time.

    Timothy Leerhoff
    Novell Support Forum Volunteer Sysop
    "The future comes slowly, the present flies and the past stands still forever"

  • RELEVANCY SCORE 2.52

    DB:2.52:Upgrading The Esx 4.0 Environment To Esxi.4.1 c7



    Hi,

    In our environment we are using VMware ESX 4.0.0, but now we are trying to upgrade from ESX 4.0.0 to ESXi 4.1.0.

    All ESX hosts are in a blade center and use shared storage (SAN). All ESX hosts are in one cluster in vCenter, and VMotion, DRS and HA are enabled on that cluster.

    If anyone has upgraded their VMware environment from ESX 4.0 to ESXi 4.1.0, please let me know the whole upgrade process.

    What are the important things I need to take care of? Also, what documentation should I prepare before upgrading to ESXi 4.1.0?

    Can I add both ESX hosts and ESXi hosts into one cluster and create a mixed environment?

    If there are any suggestions, processes, materials or screenshots, please let me know.

    Thanks Regards,

    DB:2.52:Upgrading The Esx 4.0 Environment To Esxi.4.1 c7


    Hi Troy,

    Thank you for the answers document link.

    Regards

    Kapil.

  • RELEVANCY SCORE 2.52

    DB:2.52:Exchnage 2007 Cluster ca


    Hi,

    We are trying to implement a test environment for a 2-node MS Exchange 2007 cluster. As it is only a test environment, we cannot afford to buy a SAN / iSCSI / IP-SAN to implement the cluster. In fact we are using two high-configuration PCs as the two Exchange server nodes.

    I would like to know if we can implement the Exchange 2007 cluster using a third PC, or a network drive on a separate machine in the LAN, as the common storage (in place of a SAN box or iSCSI box). If this is not possible, is there any other simple and cost-effective way by which we can implement and test an MS Exchange 2007 cluster?

    Your kind feedback will be appreciated.

    Thanks and regards,
    Srinivas

    DB:2.52:Exchnage 2007 Cluster ca

    Hello Srinivas,
     
    Thank you for your post! I would suggest posting your question in one of the 'TechNet Forums » Exchange Server' located here: http://forums.microsoft.com/TechNet/default.aspx?ForumGroupID=235&SiteID=17

    Have a great day!

  • RELEVANCY SCORE 2.52

    DB:2.52:Clustering Question 7j


    Hi everyone!

    We have a small environment with 2 hp dl 365 hosts and 1 iscsi san. We use Vsphere 4 ESS.

    We have 1 terminal server outside of the SAN, which runs 2003 Server Enterprise. This was done mainly for speed reasons, as we found that a virtual terminal server was a little too slow with 35 users using MS Office and printing.

    We would like to cluster this server with another for DR reasons.

    Is it possible to build a terminal server in vmware, and cluster it to a server outside of the SAN?

    OR

    is it best to build both servers in vmware, and cluster them on the SAN?

    Peter

    DB:2.52:Clustering Question 7j


    Hi

    As far as I am aware MS Terminal services is not cluster aware, however you should be able to add a second TS as part of a Network Load Balanced (NLB) array.

    Obviously there is no reason this cannot be a virtual server and you would also get the benefit of having a second live server to service clients, assuming you are following MS best practices and have relocated terminal server user profiles to a centrally available file sharing area.

    Couple of documents worth looking at

    First is relating to ESX 3x but is still relevant http://www.vmware.com/files/pdf/implmenting_ms_network_load_balancing.pdf

    MS article http://support.microsoft.com/kb/323437

    I have done NLB terminal servers before and it was a while ago, the simplest way is to add both servers to the same OU in active directory then apply a GPO that locks down users etc and does the redirect on user profiles for the TS environment etc

    I seem to remember having problems when I tested using VMware and using Unicast settings, so ended up using Multicast


  • RELEVANCY SCORE 2.52

    DB:2.52:Thread: Groupwise Cluster Planning. p1


    We have a law firm that lost GroupWise and accounts for almost a week due to hardware issues. I am now faced with the task of designing a 99.999% uptime GroupWise environment. Cost is not a large issue, but where do I start? My base plan is to use an IBM BladeCenter with 2 blades and a fibre channel SAN in a two-node cluster with OES NetWare.

    Is a two-node cluster the right solution?

    Or should I plan a four-node cluster?

    Does OES2 NetWare run well on IBM HS21 blades and an IBM DS3400 SAN?

    DB:2.52:Thread: Groupwise Cluster Planning. p1

    I have one environment that is supporting a 300-person architectural engineering firm that is running on a 3-node cluster using IBM rack servers and an IBM 4300 series SAN. File access is spread across two of the nodes; WebAccess, NetStorage and iPrint are on one node, and all of the remaining GW agents plus a Novell Storage Manager engine are running on the third. The two non-GW nodes are also running individual instances of iFolder. Everything but iFolder is configured so it can be migrated to any of the other servers.

    NetWare 6.5 SP7, GW 7.0.2 HP1.

    Performance is good. It was slowing down after a couple of years using 4200-series SAN components, but an upgrade late last year improved things quite a bit.

    We had some issues initially until there were fixes in the firmware and microcode on the SAN and HBAs. It's configured with dual redundancy and quad pathing on the fibre.

    I also have another AE firm that is running a similar configuration but using an Equallogic iSCSI SAN for about 200 users.

    Both sites have GB to the desktop.

    -jt

    On Thu, 07 Feb 2008 21:56:02 +0000, sdk_dude wrote:

    We have 52 lawyers and 36 support staff. We have WebAccess, iFolder, NetStorage and iPrint in place now. The lawyers love it when they can bring up docs or email from court on laptops, and some are using PDAs now also.

    They want a BlackBerry server to be added to the mix also.

    The main sysadmin is not into Linux, so I had hoped to stay away from it.

    Thanks,

  • RELEVANCY SCORE 2.52

    DB:2.52:Database Configuration In Windows Cluster Environment a7


    1) I have installed Oracle 10g R2 on both nodes in a Windows cluster environment.
    2) I created a database on Node1 and placed all DBF, CTL and LOG files on the shared SAN location.
    3) Node2 does not have any database.

    I want Node2 to take over the Node1 database in case Node1 crashes. How can I do this? That is, what changes do I have to make in the Node2 configuration?

    DB:2.52:Database Configuration In Windows Cluster Environment a7

    Yes, you are right... but I want to test this approach as well... I hope you understand...
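    Not an answer given in the thread, but for testing that approach by hand: assuming identical Oracle homes on both nodes, the shared disk (datafiles, control files, redo logs and spfile) mounted on Node2 after Node1 fails, and a SID of ORCL (all placeholders), a rough sketch of a manual cold failover from Node2 might look like the following. Oracle Fail Safe is the usual way to automate this on Windows clusters:

        # Rough sketch only; paths and names are placeholders.
        $env:ORACLE_SID = "ORCL"

        # One-time step on Node2: create the Windows service for the instance
        # (also needs a copy of the password file; the spfile lives on the shared disk):
        #   oradim -NEW -SID ORCL -STARTMODE manual

        # With the SAN volume mounted on Node2, start the service, then mount/open the DB.
        net start OracleServiceORCL
        "startup" | sqlplus / as sysdba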

  • RELEVANCY SCORE 2.52

    DB:2.52:Reduce Cluster Failover/Failback Times mj


    Hello,
    I am using SQL Server 2012 Standard Edition in a test environment. It is a clustered environment with fast SAN storage. Whenever I perform a failover of the SQL Server services it takes about 30 seconds, and when I fail back it takes around 35 seconds. I am only hosting 2 databases
    and they are 2 GB and 500 MB in size. This is a VM with 6 GB of memory. I set a 4 GB max memory setting for SQL and left 2 GB to the OS.
    I implemented instant file initialization and also reduced the number of VLFs to see if that helps to reduce the failover or failback time.
    Can someone please tell me how I can bring this time down to less than 10 seconds?
    Thanks a bunch

    DB:2.52:Reduce Cluster Failover/Failback Times mj


    You can also check the event logs to see which resources are going offline and coming online, in what sequence, and how much time each is taking. Based on that you can work with the relevant vendor, be it storage, AD or others.
    Since these are VMs, also check that the ESX servers are healthy in all stats, and check with your network team whether any packet drops are happening.
    Hopefully you don't have any datacenter issues like temperature etc.
    Santosh Singh
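    To put numbers on where the 30 seconds go, the cluster log and the FailoverClustering event channel give per-resource timings; a minimal sketch (the destination path is a placeholder):

        Import-Module FailoverClusters

        # Dump the last 15 minutes of the cluster log from every node to C:\Temp.
        Get-ClusterLog -TimeSpan 15 -Destination "C:\Temp"

        # Recent failover-clustering events, to see which resource takes longest
        # to come online after a failover.
        Get-WinEvent -FilterHashtable @{
            LogName = 'System'; ProviderName = 'Microsoft-Windows-FailoverClustering'
        } -MaxEvents 100 | Format-Table TimeCreated, Id, Message -AutoSize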

  • RELEVANCY SCORE 2.51

    DB:2.51:Installation Of Performance Manager In Veritas Cluster 9j



    We are in the process of setting up a new environment for both Patrol and Patrol Perform/Predict. This will see our PEM as well as our Performance Manager console being clustered across two Solaris servers using VCS (a separate service group for each). The disk will be presented from our SAN. I'm pretty clear on how we will go about the installation of the Performance Manager, but I'd appreciate any advice as to the agent installation. Ideally, we would like to monitor/collect data on both servers all the time, so we would like the agent installed on both servers. Can we do this even though we will have the Performance Manager only operating on one at any given time? i.e. can we install the agent outwith the cluster, at a server level?

    DB:2.51:Installation Of Performance Manager In Veritas Cluster 9j


    Okay - I've got a call open with support just now, but this is what I think may work for running the managing console in a failover environment.

    Just to recap, we have a Veritas service group in which all of the Performance Manager components and data reside. The filesystem for the service group sits on our SAN and is zoned to two Sun servers, allowing the Performance Manager to run from either. The service group has its own IP address and we connect to the console using it.

    At present we have an identical entry in cron on both machines in the failover group, i.e. the following is on both systems:

    * * * * * /ppp/best1/Patrol3/Solaris-2-8-sparc-64/best1/7.1.20/bgs/bin/perl -w -I /ppp/best1/Patrol3/Solaris-2-8-sparc-64/best1/7.1.20/bgs/lib/PERL /ppp/best1/Patrol3/Solaris-2-8-sparc-64/best1/7.1.20/bgs/scripts/pcron 2>/dev/null

    This means that on the server on which the console is running there are no issues, i.e. this will invoke the pcron script as designed. The only problem is that on the other server cron will try to execute this entry every minute, which it will fail to do as the filesystem will be attached to the other side of the failover group. Consequently, the mail file of the best1 user on that other system will grow every minute. This is far from elegant, but I believe it would work if we can make pcron itself independent of the host and put processes in place to deal with the mail file issue.

    For pcron, we have the following in the repository directory - we are running on the s1emp1 server at present, but up until the 24th of September we had been on s2emp1. Clearly, there are counter and pcron files created for each hostname:

    PPP:/ppp/best1/Patrol3/Solaris-2-8-sparc-64/best1/7.1.20/bgs/pcron/repository> ls -l
    total 16
    -rw------- 1 best1 best1    4 Oct 23 00:00 s1emp1-best1.counter
    -rw------- 1 best1 best1 4918 Oct 23 00:21 s1emp1-best1.pcron
    -rw------- 1 best1 best1    3 Sep 24 00:00 s2emp1-best1.counter
    -rw------- 1 best1 best1  295 Sep 24 00:20 s2emp1-best1.pcron

    The situation is complicated somewhat by the pcron counter file - this contains the number of entries that have been passed to the pcron file, and it increments by 1 every time an entry is added. Additionally, each new entry in the pcron file is assigned an ID which is based upon this counter. Consequently, I wouldn't think that merely copying the entries from one pcron file to the other after a failover would necessarily work.

    Instead, looking at the two Perl scripts that are used to run pcron:

    /ppp/best1/Patrol3/Solaris-2-8-sparc-64/best1/7.1.20/bgs/scripts/pcrontab (this script is used to build and maintain the pcron file)

    and

    /ppp/best1/Patrol3/Solaris-2-8-sparc-64/best1/7.1.20/bgs/scripts/pcron (this script is used for logging purposes)

    (There is also the additional pcrontab.sh script, which acts as a wrapper for pcrontab.)

    In each of these scripts there is a getHost subroutine which is used to return the physical hostname. In order to make pcron work on either host (i.e. make it independent of the hostname itself), should we just be able to change this routine to return the name of our service/failover group as the host? If this worked, it would remove any dependency on the hostname for pcron and should mean that it would continue to function even after a failover to the other server.

    We will try and test this hopefully either today or over the weekend, but I'd appreciate any thoughts any of you have on this approach?!

    Cheers
    John

  • RELEVANCY SCORE 2.51

    DB:2.51:Thread: Migrating Nss Trustees From One Cluster Volume To Another kj


    Hi everyone,

    I'm currently in a situation where we are migrating from one SAN to another in an OES2 SP1 x86_64 environment. The old iSCSI SAN is still connected and in production. The new SAN is connected and we want to migrate the data from the old volumes to the new ones.

    The problem: how can we transfer data from one clustered NSS volume to another, including the appropriate rights? I'm pulling my hair out on this one.

    Thanks for your help!

    Regards,

    Justin Zandbergen

    DB:2.51:Thread: Migrating Nss Trustees From One Cluster Volume To Another kj

    Just ran the script from an old to a new home volume with 80 GB of data, connected with iSCSI 1 Gbit, and it took 5 minutes :-) I'm so happy! :-)

  • RELEVANCY SCORE 2.51

    DB:2.51:Heterogeneous Cluster fz


    Hi All,

    Has anyone tried to build a 'Heterogeneous cluster'?

    i.e. A cluster of 2 machines that connect to the same back end database,
    but perform different functions? This may seem a strange way to do things,
    but it is necessary in our environment.

    Would this be a true 'cluster' in the weblogic sense of the word?
    weblogic.cluster.enable etc.

    Regards

    Steve Haigh

    DB:2.51:Heterogeneous Cluster fz

    Yes, if you have the pool in server1 and register the DataSource in JNDI, from server2 you can
    lookup server1's JNDI and grab that DataSource.

    Gene

    "Steve Haigh" shaigh@squaretrade.com wrote in message news:3a96f0f8$1@newsgroups.bea.com...
    What about if you want 2 machines to use the same jdbc pool but want them to
    perform different functions? Can I do this using 'Tiering'?

    Steve

    Gene Chuang gchuang@kiko.com wrote in message
    news:3a96d786$1@newsgroups.bea.com...
    It wouldn't be called clustering, but tiering.

    There's no point in containing two completely heterogeneous servers in the same cluster. Clustering is for the purpose of load balancing and failover, and these do not apply to disparate services!

    Gene

    "Steve Haigh" shaigh@squaretrade.com wrote in message
    news:3a95b5ad$1@newsgroups.bea.com...
    Hi All,

    Has anyone tried to build a 'Heterogeneous cluster'?

    i.e. A cluster of 2 machines that connect to the same back end
    database,
    but perform different functions? This may seem a strange way to do
    things,
    but it is necessary in our environment.

    Would this be a true 'cluster' in the weblogic sense of the word?
    weblogic.cluster.enable etc.

    Regards

    Steve Haigh



  • RELEVANCY SCORE 2.51

    DB:2.51:Hardware Migration From Non-Cluster To Cluster Environment jp



    Dear All,

    We are planning for Hardware migration from a non-cluster to cluster environment.

    Existing Landscape details:

    Distributed system CI and DB on different hosts.

    Hardware and OS details (same for both nodes):

    Server :- SUNW,Sun-Fire-T200

    OS:- SunOS 5.10

    Version:- Solaris 10

    Firmware Version:- OBP 4.20.4 2006/05/12

    DB : Oracle 10g

    New server details:

    Hardware clustering, please find the below details:

    1. Sun OS Release:- 5.11
    OS Version:- 11.0 64-bit (Solaris 11)

    Hardware Details:- Product Name:- SPARC T4-1

    Sys Firmware Version:- Sun System Firmware 8.1.4.e 14/01/2012

    Sun OS Release:- 5.11

    OS Version:- 11.0 64-bit

    DB: Oralce 10g

    We are doing the hardware migration; the existing system is in a non-cluster environment and we are migrating to a cluster environment.

    In this scenario, is it OK if I perform a system copy with backup/restore?

    Please suggest.

    Regards,

    Raj

    DB:2.51:Hardware Migration From Non-Cluster To Cluster Environment jp


    Hi Ratnajit,

    Backup/restore method is more efficient if there's no OS/DB migration issue.

    Best regards,

    Orkun Gedik

  • RELEVANCY SCORE 2.51

    DB:2.51:Ha - Hosts With San And Local Vmfs Volumes 13



    I have 4 ESX U2 hosts in a vSphere DRS cluster and would like to enable HA.

    The hosts all have VMs on DAS and share SAN volumes.

    The VMs on the DAS are non-critical test servers which will not need failover in the event of a host failure; however, the SAN VMs will.

    My question is:

    How will HA react in this environment? Will it fail over what it can (the SAN-based VMs) and ignore servers on DAS?

    I haven't got any spare hardware to test with at the moment.

    Thanks in advance

    DB:2.51:Ha - Hosts With San And Local Vmfs Volumes 13


    HA has what is called a "compatibility list"; this list contains all the "host - VM" combinations in terms of where a VM could be restarted. In the case of the VMs on DAS, that would be a short list.

  • RELEVANCY SCORE 2.51

    DB:2.51:Migration 2 Productive Nodes Into A Cluster 3c


    I currently have a system with two independent nodes which I should reconfigure into a cluster. One of these nodes is productive, providing HTTP, SSH and NFS services, and I have only 'short' maintenance windows (2 hours on Sunday from 00:00 ;-( ).
    Shared storage on a SAN, interconnects, network interfaces and service addresses for all 3 services are all ready; the service addresses and SAN are already in use.

    So the basic steps would be:
    - Set up the second node as a cluster node
    - Move the services to this node as 'clustered services' in resource groups
    - Set up the first node as a cluster node in the same cluster

    In the end my question is simple: is it better to set up a one-node cluster, migrate the services and then add one more node to that cluster, or should I set up the cluster as a 2-node cluster from the beginning, but only install the second node after I have moved the services to the first node? The latter would mean I have to run the two-node cluster without its second node for at least a week.
    This is not a question of availability, as I am running these services on a single node now anyway, but I don't know if this really works.

    My main goal is to minimize the work and risk in the maintenance windows, and not to touch the system which has the services running.

    Fritz

    DB:2.51:Migration 2 Productive Nodes Into A Cluster 3c

    As you need to disable install mode on the first cluster node to set up the RGs without the second cluster node up and running, the only sensible way is to start with a one-node cluster.

    Regards, Marc

  • RELEVANCY SCORE 2.51

    DB:2.51:(Urgent)!!!! Sqlserver 2005 Cluster San Migration p9


    Current scenario: we have a 2-node Active/Active SQL Server 2005 cluster. We have DAS instead of a SAN. Now we are implementing a SAN in our SQL infrastructure. What is the best way to move the SQL Server 2005 cluster databases from DAS to the new SAN? What is the best way to move the cluster quorum from DAS to the new SAN?
    Urgent help is much appreciated.

    DB:2.51:(Urgent)!!!! Sqlserver 2005 Cluster San Migration p9


    Hi,

    Please discuss this issue in our SQL Server Disaster Recovery and Availability forum:

    http://social.msdn.microsoft.com/Forums/en/sqldisasterrecovery/threads

    If it is urgent, you can contact Microsoft Customer Support Service (CSS) for direct assistance.

    To obtain the phone numbers for specific technology request, please refer to the website listed below:
    http://support.microsoft.com/default.aspx?scid=fh;EN-US;PHONENUMBERS

    If you are outside the US, please refer to
    http://support.microsoft.com for regional support phone numbers.

    Tim Quan
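    For reference while waiting on that forum: once the new SAN LUNs have been added as cluster disk resources in the SQL group, a common pattern for moving user databases is to repoint the file paths, take the database offline, copy the files, and bring it back online. A minimal sketch run through Invoke-Sqlcmd (from the SQL 2008+ client tools; plain sqlcmd works the same way); the database name, logical file names, drive letters and instance name are all placeholders, and system databases plus the quorum follow separate, documented procedures:

        # Sketch only: repoint one user database's files to the new SAN volume (S:).
        $inst = "SQLCLUS01\INST1"
        Invoke-Sqlcmd -ServerInstance $inst -Query "ALTER DATABASE AppDb MODIFY FILE (NAME = AppDb_Data, FILENAME = 'S:\SQLData\AppDb.mdf');"
        Invoke-Sqlcmd -ServerInstance $inst -Query "ALTER DATABASE AppDb MODIFY FILE (NAME = AppDb_Log, FILENAME = 'S:\SQLLogs\AppDb_log.ldf');"
        Invoke-Sqlcmd -ServerInstance $inst -Query "ALTER DATABASE AppDb SET OFFLINE WITH ROLLBACK IMMEDIATE;"

        # Copy the physical files to the new cluster disk while the DB is offline.
        Copy-Item "R:\SQLData\AppDb.mdf" "S:\SQLData\"
        Copy-Item "R:\SQLLogs\AppDb_log.ldf" "S:\SQLLogs\"

        Invoke-Sqlcmd -ServerInstance $inst -Query "ALTER DATABASE AppDb SET ONLINE;"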

  • RELEVANCY SCORE 2.50

    DB:2.50:Thread: Expanding Cluster Volume jj


    What is the correct procedure for adding disk space to an existing volume? The disk space has been presented on the SAN side. Do I just use NSSMU to expand the pool as I would in a non-clustered environment?

    DB:2.50:Thread: Expanding Cluster Volume jj

    And that is supposed to reassure me?

    Tim
    ___________________
    Tim Heywood (SYSOP)
    NDS8
    Scotland
    (God's Country)
    www.nds8.co.uk
    ___________________
    In theory, practice and theory are the same
    In practice, they are different

  • RELEVANCY SCORE 2.50

    DB:2.50:Cambiar Fuente Predeterminada A Comic San df


    I want to change the default font to Comic Sans. Could someone please tell me how to do it? Many thanks in advance.

    DB:2.50:Cambiar Fuente Predeterminada A Comic San df

    Hello. This type of question is not answered here; if it is on Windows 7, as you already mention in another thread, look here: http://social.msdn.microsoft.com/Forums/es-ES/sugerenciases/thread/8db5bce6-3fa6-4501-9dea-1a0d28acdfff
    Regards
    Fran Díaz | {geeks.ms/blogs/fdiaz/} | {onobanet.com} | {secondnug.com}

  • RELEVANCY SCORE 2.50

    DB:2.50:Thread: Clustering Configuration ja


    Hello,

    I currently want to configure a cluster, but I am new to setting up this environment in NetWare. Is there any documentation online that I can read?

    I searched the official Novell website, but unfortunately I found nothing there beyond the general idea of clustering and SAN.

    With many thanks,

    Meiji

    DB:2.50:Thread: Clustering Configuration ja

    You didn't say what you are planning on clustering.

    The documentation at the following URL is pretty decent:

    http://www.novell.com/documentation/ncs65/index.html

    After you read the above docs and have questions (everyone does), please ask.

    Timothy Leerhoff
    Novell Support Forum Sysop
    "When your memories become greater than your dreams, you are a dead person looking for a grave."

  • RELEVANCY SCORE 2.50

    DB:2.50:Cluster Nodes Rebooted After San Maintanence k1


    We were performing some maintenance on one of the nodes in our HP P4800 SAN cluster; the volumes all have Network RAID.

    I have a 6-node Hyper-V cluster with MPIO enabled to all volumes, and our Exchange 2010 environment is deployed in VMs on this cluster.

    When we began the maintenance on the node, one path to the storage was lost (as you would expect), but what appeared to happen was that the cluster nodes all rebooted roughly 10 minutes after the SAN node was taken offline. Has anyone had this problem or know of it?

    Cheers, Richard

    DB:2.50:Cluster Nodes Rebooted After San Maintanence k1

    Most probably the problem is with the failover of the SAN cluster. Could you double-check the failover scenario (preferably with a DR test) for this configuration?
    Thank you, Shani

  • RELEVANCY SCORE 2.50

    DB:2.50:Using Jboss Cache In A 2 Node Cluster fp



    I have a configuration with a cluster consisting of two nodes. Do you suggest using JBoss Cache in such an environment, as opposed to EHCache, which seems to perform better than JBoss Cache in such clusters?


  • RELEVANCY SCORE 2.50

    DB:2.50:Any Fix Available For Storage Test On Fail-Over Cluster In San Environment ( Kb2914974 ) 9k


    Microsoft KB article KB2914974 explains that storage tests on a failover MSCS cluster may not discover all shared LUNs when a multi-site storage area network (SAN) is configured with site-to-site mirroring.
    Does the above article apply to Windows Server 2008, or only to Windows Server 2012?
    If a fix is available, please provide the download link; or should we simply follow the resolution provided in the article and skip the storage validation?


  • RELEVANCY SCORE 2.50

    DB:2.50:Thread: Cluster With Dual Replicated San Units m3


    Hi

    Hopefully someone can give me a few pointers!

    I have two buildings joined by fibre.

    I'd like to create a single cluster across the nodes in both buildings:

    2 cluster nodes and an IBM SAN unit in one building

    2 cluster nodes and a SAN unit in the other building

    We would use FalconStor to replicate data (over dedicated fibre links) from the IBM SAN unit to the SAN unit in the other building. The two nodes in the other building would be set up to use that SAN unit, and hopefully they'll all be in the same cluster.

    Would this work?

    At the moment we have 3 nodes and the IBM SAN in one building.

    Thanks

    Simon Marriott

    DB:2.50:Thread: Cluster With Dual Replicated San Units m3

    On Sun, 09 Jul 2006 06:56:04 -0700, Timothy Leerhoff tleerhoffNO@SPAMqwest.net wrote:

    Take a look at Business Continuity Clustering (BCC).

    Note: BCC seems to use 2 clusters, and to require some appropriate tricks to have the array connections go from secondary to primary. Mr Beels, IIRC, recently posted some good descriptions, but was working with a different array vendor (EMC, I think).

    Also, BCC 1.1 will provide support for the array tricks *inside* BCC; BCC 1.0 still requires the tricks to be done externally. BCC 1.1 is apparently in closed beta right about now.

    /dps

    Using Opera's revolutionary e-mail client: http://www.opera.com/m2/

  • RELEVANCY SCORE 2.50

    DB:2.50:Reconnect Existing Datastore After Fresh Esx 4.0 Installation kx


    Hi All,

    So, I'm interested in setting up the new vSphere, but I have some burning questions about the installation itself due to my current environment setup.

    Current environment

    - 2 cluster host (running on old IBM server) installed with ESX 3.5

    - DS3000 SAN storage (with 600GB LUN as the ESX 3.5 datastore)

    - VC 2.5 but running as a virtual machine in the ESX environment

    My company is doing a tech refresh and will be trading in the old IBM servers for newer models, so I basically can't perform any in-place upgrade.

    If I install vSphere on the 2 new servers and vCenter on separate hardware, can I reconnect my current LUN (which holds the current ESX 3.5 VMFS datastore) to the new vSphere cluster and rescan the storage? Or should I first migrate the data to an external disk and then import all the VMX and VMDK files back after I create a new VMFS LUN for vSphere?

    Kindly advise, since I can't seem to find such a scenario in the installation guide. Thank you very much.

    DB:2.50:Reconnect Existing Datastore After Fresh Esx 4.0 Installation kx


    Here's what I would do. Keeping your old and new servers online until you are done.

    1. Upgrade your virtual center to 4.0

    2. Install ESX 4 onto your new hardware and attach them to your 4.0 virtual center.

    3. Attach your LUN's to the new ESX 4 servers

    4. Migrate your virtual center to the new ESX 4 servers

    5. Migrate any other VM's to your new ESX 4 servers

    6. Upgrade VMware tools on all the VM's

    7. Upgrade the virtual hardware on all the VM's

    No upgrades have been done to VMFS, so you don't need to worry about that. When you're doing your install on your new servers, I would make sure your SAN is not connected to them to limit the chance of destroying your datastore.
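    For step 3, once the existing LUN is zoned to the new ESX 4 hosts, a rescan is what makes the old VMFS datastore show up; a minimal PowerCLI sketch with placeholder vCenter and host names:

        # Rescan HBAs and VMFS on the new hosts after presenting the existing LUN.
        Connect-VIServer -Server "vcenter.example.local"
        Get-VMHost -Name "esx4-a.example.local", "esx4-b.example.local" |
            Get-VMHostStorage -RescanAllHba -RescanVmfs

        # Confirm the existing datastore is visible before registering/migrating VMs.
        Get-Datastore | Select-Object Name, FreeSpaceMB, CapacityMB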

  • RELEVANCY SCORE 2.50

    DB:2.50:Attaching Volumes To A Sql Cluster 2005 d3


    We have our production SQL 2005 server (64-bit Standard Edition) attached to an iSCSI Equallogic SAN. We have set up 2 new servers and installed the cluster service. My question is: can I install SQL 2005 in this new cluster environment and later on disconnect the data and log volumes from the production server, attach those volumes to the cluster and reattach the DBs? The reason we need to do it like that is that we don't have enough spare space in the SAN to initially create these 2 volumes in the cluster.
     
    Any ideas/suggestions would be greatly appreciated.

    DB:2.50:Attaching Volumes To A Sql Cluster 2005 d3

    Yes, but it is not quite as simple as that.  Your best bet is to detach the databases (user databases ONLY).  Stop SQL, and remove the disk dependencies.  You can then connect the volumes to the new cluster, create clustered resources from the disks, set the SQL Service to be dependent on the disks (obviously they must be in the same resource group) and then attach the databases.
     
    If you don't understand these instructions, then you need more clustering experience to pull this trick off.  Not a jab or anything, just an observation on your likelihood of success.
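    To make the "set the SQL Service to be dependent on the disks" step concrete: on clusters managed with the FailoverClusters PowerShell module (Windows 2008 R2 and later; older clusters have an equivalent add-dependency switch in cluster.exe), a minimal sketch with placeholder resource and group names:

        Import-Module FailoverClusters

        # The new disk must be in the same resource group as the SQL Server resource,
        # and SQL must depend on it before databases can live on that volume.
        Move-ClusterResource -Name "Cluster Disk 3" -Group "SQL Server (MSSQLSERVER)"
        Add-ClusterResourceDependency -Resource "SQL Server (MSSQLSERVER)" -Provider "Cluster Disk 3"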
     
     
     

  • RELEVANCY SCORE 2.50

    DB:2.50:I/O Considerations When Making Vm Revert To Last Snapshot Prior To Mass Bootup? cd



    We have a Vsphere 4 environment with 8 esxi hosts per cluster. In this cluster we have one 512GB datastore that holds all the linked-clones and master vms. This is of course a test environment....

    We have a need to boot 30 vms at a time to perform automated testing. In the past these 30 vms would boot up fast and easy without issues. We recently started telling the vms (through vmware API) to revert the vms to the last snapshot prior to booting. This has made the vm boot process EXTREMELY longer and is causing us issues.

    So, I am guessing that reverting to the last snapshot is what is causing us the issue? My questions are:

    1) Is that correct it is the reverting snapshot causing it?

    2) What indeed happens when you boot a vm and it loses any changes it has when last on and just reverts?

    3) Is this just killing our I/O? What else is this doing?

    If this is a SAN I/O issue then I can look at balancing out our LUN load, etc., but first I need to know what's really happening under the hood to properly address this. Is there another method of meeting our ultimate goal without causing such performance problems?

    DB:2.50:I/O Considerations When Making Vm Revert To Last Snapshot Prior To Mass Bootup? cd


    Yes, that's what I would say. Spread out your snapshot VM's across different datastores as much as possible, that may help with your IO distribution.

    Failing that, I don't see much else you can do. Given your criteria for the simpler approach and scripts, I see your point about keeping things simple and unified. I wish I had a better solution for you.
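    If spreading the linked clones across datastores isn't enough, staggering the revert-and-boot into small batches is another way to soften the I/O spike; a minimal PowerCLI sketch, with the vCenter address, VM name pattern, batch size and delay all placeholders:

        # Revert each test VM to its most recent snapshot and power it on in batches
        # of 5, pausing between batches to spread the I/O load on the datastore.
        Connect-VIServer -Server "vcenter.example.local"
        $vms = Get-VM -Name "autotest-*"
        $batchSize = 5

        for ($i = 0; $i -lt $vms.Count; $i += $batchSize) {
            $batch = $vms[$i..([Math]::Min($i + $batchSize, $vms.Count) - 1)]
            foreach ($vm in $batch) {
                Set-VM -VM $vm -Snapshot (Get-Snapshot -VM $vm | Select-Object -Last 1) -Confirm:$false
                Start-VM -VM $vm -Confirm:$false
            }
            Start-Sleep -Seconds 120
        }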

  • RELEVANCY SCORE 2.50

    DB:2.50:Thread: Cluster Down To Increase Pool Size?? 9a


    Hi,

    We recently found TID 10057924 in Novell's Knowledgebase saying that we have to down the whole cluster to increase the pool size (specifically, to add free space to an existing cluster-enabled volume).

    Until now we did this without even deactivating the pool.

    Do we have to worry now about the health of our increased pools?

    Why is Novell proposing to shut the whole cluster down to perform

    cluster operations like adding free space to cluster volumes or

    deleting cluster volumes?

    (We have a 4 node cluster NW6.5, NCS 1.7, EMC CX600 SAN)

    B. Kuhlmann

    University of Hamburg,

    Germany

    DB:2.50:Thread: Cluster Down To Increase Pool Size?? 9a

    bekuhlmann@yahoo.com wrote:

    we lately found a TID 10057924 in Novells Knowledgebase saying that we have to down the whole cluster to increase the pool size (exactly how to add free space to an existing cluster enabled volume).

    That document dates from the year 2000 and seems to have originally been written for NetWare 5 clusters. Since NetWare 6, Novell has much enhanced the NSS functionality and made shared storage much safer, and I don't think these instructions still apply to NetWare 6.x clusters. What's especially revealing is that the document talks about expanding volumes and not expanding pools. On NetWare 6.x, you don't expand volumes but pools. So IMHO, this document simply does not apply when it comes to creating or expanding clustered storage.

    The cluster documentation on creating clustered storage does in no place mention that only one node should be active:

    http://www.novell.com/documentation/.../h2mdblj1.html

    If this were really important, it should at least be mentioned at this place in the documentation.

    Until now we did this without even deactivating the pool.
    Do we have to worry now about the health of our increased pools?
    Why is Novell proposing to shut the whole cluster down to perform cluster operations like adding free space to cluster volumes or deleting cluster volumes?

    I have a couple of clusters myself and always did storage configuration with the cluster fully active. I never had problems with that.

    Marcel Cox (using XanaNews 1.18.1.3)

  • RELEVANCY SCORE 2.50

    DB:2.50:Hyper V Lab And Live Migration fp


    Hi Guys,
    I have 2 Hyper-V hosts I am setting up in a lab environment. Initially, I successfully set up a 2-node cluster with CSVs, which allowed me to do live migrations.
    The problem I have is that my shared storage is a bit of a cheat, as I have one disk assigned in each host and each host has StarWind Virtual SAN installed. HostA has an iSCSI connection to HostB's storage and vice versa.
    The issue this causes is that when the hosts shut down (because this is a lab, it's only on when required), the cluster is in a mess when it starts up, e.g. VMs missing etc. I can recover from it, but it takes time. I tinkered with the HA settings and the VM settings so they restarted/didn't restart etc., but with no success.
    My question is: can I use something like SMB3 shared storage on one of the hosts to perform live migrations, but without a full-on cluster? I know I can do shared-nothing live migrations, but they take time.
    Any ideas on a better solution (rather than actually buying proper shared storage ;-) )? Or, if shared storage is the only option to do this cleanly, what would people recommend, bearing in mind I have SSDs in the Hyper-V hosts?
    Hope all that makes sense.

    DB:2.50:Hyper V Lab And Live Migration fp

    Hi Sir,
    I have 2 Hyper V hosts I am setting up in a lab environment. Initially, I successfully setup a 2 node cluster with CSVs which allowed me to do Live Migrations.
    As you mentioned, you have 2 Hyper-V hosts and use StarWind to provide the iSCSI target (this is the same as my first lab environment); I then realized that I needed one or more additional hosts to simulate a more production-like scenario.
    But if you have more physical computers, you may try other projects.
    Also please refer to this thread:
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/e9f81a9e-0d50-4bca-a24d-608a4cce20e8/2012-r2-hyperv-cluster-smb-30-sofs-share-permissions-issues?forum=winserverhyperv
    Best Regards
    Elton Ji


  • RELEVANCY SCORE 2.49

    DB:2.49:Sccm San Device Swap Out d8


    Hi All,
    The SAN drives that the SCCM mapped drives connect to are going to be upgraded and moved to a newer SAN device. SCCM and SQL are mapped to those drives. This is the scenario I will be testing in a test environment before doing it in production.
    1) Shut down all SCCM and SQL services
    2) Copy the SCCM and SQL data to the new SAN device
    3) Once the copy is complete, the drives will be mapped to the new device. Ensure the data is present on the SAN device.
    4) Start all SCCM and SQL services.
    As I have 1 central and 3 primary servers to perform this work on, a couple of questions:
    1) Can I action the central and primaries in one go, or do I need to do them one by one?
    2) What is the best practice for shutting down all the services, and what error checks should be done when the services all start up?
    Thanks in advance

    DB:2.49:Sccm San Device Swap Out d8

    Hi Robinson,
    I don't think this will be necessary, as the data will be moved to another device but the drive letter stays the same; it will be remapped to the same drive letter.
    Thanks for the document anyway.
    Is there any checklist that can be used to verify SCCM health after the SCCM services come online?
    I would be checking Event Viewer as well as the inboxes and the SMS logs.
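
    For reference, a rough PowerShell sketch of the stop/copy/start flow and a post-start check on one site server. The service names (SMS_SITE_COMPONENT_MANAGER, SMS_EXECUTIVE, MSSQLSERVER) are the usual defaults and should be verified against the actual installation; this is a sketch, not an official checklist.

        # Stop SCCM first, then SQL, before the SAN data copy.
        $services = 'SMS_SITE_COMPONENT_MANAGER','SMS_EXECUTIVE','MSSQLSERVER'
        $services | ForEach-Object { Stop-Service -Name $_ -Force }

        # ... copy the data to the new SAN device and re-map the drive letters here ...

        # Start SQL first, then the SCCM services, and confirm everything is running.
        [array]::Reverse($services)
        $services | ForEach-Object { Start-Service -Name $_ }
        Get-Service -Name $services | Format-Table Name, Status

        # Quick health check: any SMS-sourced errors in the Application log since the restart?
        Get-EventLog -LogName Application -EntryType Error -After (Get-Date).AddHours(-1) |
            Where-Object { $_.Source -like 'SMS*' } |
            Select-Object TimeGenerated, Source, Message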

  • RELEVANCY SCORE 2.49

    DB:2.49:Migrate San Disk - Windows 2008 Failover Cluster sk


    I have a 2-node Windows 2008 64-bit cluster configured, being used as a SQL 2008 cluster. I need to migrate the existing allocated SAN disks to a new SAN. Can you point me in the right direction on how to do this?

    DB:2.49:Migrate San Disk - Windows 2008 Failover Cluster sk

    Hi,
     
    I suggest discussing this issue in our SQL Server Disaster Recovery and Availability forum:
     
    http://social.msdn.microsoft.com/Forums/en/sqldisasterrecovery/threads
     
    Tim Quan - MSFT
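
    For reference, a hedged PowerShell sketch of the disk-swap portion only. The FailoverClusters module ships with Windows Server 2008 R2 and later (plain Windows 2008 would use cluster.exe instead), and "Cluster Disk 5", "SQL Server" and the group name are placeholders; drive letters also need to be swapped so SQL's file paths still resolve.

        Import-Module FailoverClusters

        # Present the new SAN LUNs to both nodes, then add them as cluster disks.
        Get-ClusterAvailableDisk | Add-ClusterDisk

        # Take the SQL group offline, copy the data from the old disk to the new one,
        # then move the new disk into the group and make SQL depend on it.
        Stop-ClusterGroup -Name 'SQL Server (MSSQLSERVER)'
        # ... robocopy / file copy from the old disk to the new disk happens here ...
        Move-ClusterResource -Name 'Cluster Disk 5' -Group 'SQL Server (MSSQLSERVER)'
        Add-ClusterResourceDependency -Resource 'SQL Server' -Provider 'Cluster Disk 5'
        Start-ClusterGroup -Name 'SQL Server (MSSQLSERVER)'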
     

  • RELEVANCY SCORE 2.49

    DB:2.49:Windows Server 2012 Hyper-V - Live Migrate From Cluster Host To Standalone Host sf


    I have 4 Hyper-V servers: HYPERV1, HYPERV2 and HYPERV3 are in a cluster on a SAN; HYPERV4 is a standalone server. All are running Windows Server 2012 Standard.
    Is it possible to live migrate a VM from the cluster to the standalone server? I can perform replication, but not live migration. When I select live migrate, only servers in the cluster are available for selection.

    DB:2.49:Windows Server 2012 Hyper-V - Live Migrate From Cluster Host To Standalone Host sf

    Found the solution myself, thanks a lot Adrian for giving me the hints!
    You have to remove the VM from Failover Cluster Manager (not delete it from Hyper-V Manager) so that the VM is no longer in HA mode. The VM will then no longer appear in Cluster Manager and you can only see it in Hyper-V Manager, and you can then perform a live migration from the cluster
    to the standalone server.
    PS: if the CPU is different, you have to enable "Migrate to a physical computer with a different processor version" on the VM.
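
    For reference, the same fix sketched in PowerShell (Server 2012 Hyper-V and FailoverClusters modules; 'VM01' and the destination path are placeholders):

        # On a cluster node: remove the VM's clustered role so it is no longer HA.
        # This removes the role from the cluster but does not delete the VM from Hyper-V.
        Remove-ClusterGroup -Name 'VM01' -RemoveResources -Force

        # If the destination CPU differs, enable processor compatibility (the VM must be off to change this).
        Set-VMProcessor -VMName 'VM01' -CompatibilityForMigrationEnabled $true

        # Shared-nothing live migration from the cluster node to the standalone host.
        Move-VM -Name 'VM01' -DestinationHost 'HYPERV4' -IncludeStorage -DestinationStoragePath 'D:\Hyper-V\VM01'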

  • RELEVANCY SCORE 2.49

    DB:2.49:Exchange 2007 Ccr On Windows 2008 Cluster Fails To Install j8


    All, I've made a bone-headed move and incorrectly removed a CCR cluster from my environment (no production mailboxes on it) for some SAN maintenance.  I know now that I should have moved the role to the passive node and then back while maintaining the cluster... but I am past that point.  When I try to install an active node in a CCR cluster, I get "The operation could not be performed because object 'CCR CLUSTER NAME' could not be found on domain controller 'domain fqdn'". Any assistance would be greatly appreciated. Thanks!

    DB:2.49:Exchange 2007 Ccr On Windows 2008 Cluster Fails To Install j8

    Kavin, what I understand from what you have done is that you now have only the passive node remaining, so using these tutorials you have to move-cms to this passive node and convert it into an active node, and then install the passive node. Just go through it: http://technet.microsoft.com/en-us/library/bb691138.aspx (all the error specifications are also in there) and http://technet.microsoft.com/en-us/library/bb124710.aspx Regards, Syed Arsalan
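
    For reference, the move-cms step the reply describes, sketched as an Exchange 2007 Management Shell command; 'CCRCLUSTER' and 'NODE2' are placeholders for the clustered mailbox server name and the surviving node:

        # Bring the clustered mailbox server online on the remaining node before
        # reinstalling the other node (run from the Exchange Management Shell).
        Move-ClusteredMailboxServer -Identity 'CCRCLUSTER' -TargetMachine 'NODE2' -MoveComment 'Moving CMS to the surviving node for SAN maintenance recovery'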

  • RELEVANCY SCORE 2.49

    DB:2.49:Clone Rac Rdbms 1j


    Hi Everyone,

    We have our first test RAC for Oracle E Business Applications running, we will need to setup the 2nd test environment of EBS on RAC on the same set of servers of first RAC environment.

    The second RAC environment is not a clone of the first RAC environment; we will perform the same clone steps from our non-RAC EBS production environment, just as we did for the first RAC environment.

    We will need to setup the 2nd test environment under the same cluster and ASM structure of 1st RAC environment.

    My puzzle is: How to clone 2nd pair of RAC RDBMS under the same cluster of 1st RAC environment ?

    As long as we can clone the 2nd pair of RAC RDBMS, we can then perform the same steps in note 466649.1 to set up the 2nd EBS RAC environment under the same cluster.

    Please outline the basic steps to achieve this goal.

    Thanks

    DB:2.49:Clone Rac Rdbms 1j

    Please refer to these docs.

    Cloning Oracle E-Business Suite Release 12 RAC-Enabled Systems with Rapid Clone [ID 559518.1]
    Cloning Oracle Applications Release 12 with Rapid Clone [ID 406982.1]
    Certified RAC Scenarios for E-Business Suite Cloning [ID 783188.1]
    Rapid Clone Documentation Resources, Release 11i and 12 [ID 799735.1]

    Thanks,
    Hussein

  • RELEVANCY SCORE 2.49

    DB:2.49:Cluster Storage Only Migration sj


    Hi, my current cluster is using storage A. We have planned to migrate from storage A to storage B while retaining the cluster environment. Is this possible by doing a D2D copy of the data in the storage? What are the impacts on the production environment and the cluster itself? Is re-installation of the cluster needed in this case to point it back to the new storage B?
    *Storage A and storage B are both SAN storage.

  • RELEVANCY SCORE 2.48

    DB:2.48:How To Perform Cluster Environment In Proactivenet 8.0 ? mj



    Hi all,

    I have installed ProactiveNet 8.0 on a Windows 2003 server. Now I need to create a clustered environment in ProactiveNet Performance Management for failover. So, how can I cluster the ProactiveNet agents? Is there any documentation for it?

    DB:2.48:How To Perform Cluster Environment In Proactivenet 8.0 ? mj


    Hi Lokesh,

    According to pages 64-70 and pages 157-159 in the BMC ProactiveNet Installation Guide 8.0 located at http://documents.bmc.com/supportu/documents/77/37/107737/107737.pdf, it stated the following on how to install ProactiveNet Server version 8.0.00 on a Windows cluster environment:

    =============================================

    On page 64 (page 76 in Adobe Reader), it stated the following:

    The BMC ProactiveNet Server installation will be done with logical IP/hostname enabled on the Primary node. The BMC ProactiveNet Server should be installed on the shared disk.

    =============================================

    On page 67 (page 79 in Adobe Reader), it stated the following:

    NOTE

    You cannot install BMC ProactiveNet Server in a root directory. For example c:\ or D:\. Do not install the BMC ProactiveNet Server on shared drives. This can create a conflict in registry entries. Create a separate folder under any of the system folders to install the BMC ProactiveNet Server (target directory). Installing directly under Documents and Settings folder may lead to deletion of this folder while uninstalling.

    =============================================

    On page 158 (page 170 in Adobe Reader), it stated the following:

    Installation procedure

    - Identify one computer as primary and the other as standby from the cluster.

    - BMC ProactiveNet needs to be first installed on the primary computer with /primary switch (setup.exe failover primary), in a directory on the quorum disk.

    - After successful installation on the primary computer, make sure that the server is up and running.

    - Then, install the server on the standby computer with /standby switch (setup.exe failover standby), into the same directory on the quorum disk

    =============================================

    From the above information, there is some conflicting information. Page 64 stated ProactiveNet Server should be installed on a shared disk, but page 67 stated ProactiveNet Server should not be installed on a shared drive. On page 158, it stated to install ProactiveNet Server on the primary server in a quorum disk and then install ProactiveNet Server on the standby server in the same directory on the quorum disk.

    The statement on the page 67 (Do not install BMC Proactivenet server on shared drives) is incorrect. The statement should be "Do not install BMC Proactivenet server on a shared directory". This is a documentation error and will be corrected.

    Also the "setup.exe high-availability primary" and "Setup.exe high-availability standby" commands to install ProactiveNet Server 8.0.00 on the primary and standby cluster nodes are incorrect. This is a documentation error and will be corrected.

    So the questions are: Should ProactiveNet Server 8.0.00 be installed on each Windows cluster node and install the common files (e.g. configuration files, etc.) on a shared drive? Or should ProactiveNet Server 8.0.00 and the common files be installed on a shared drive?

    Below are the steps to install the ProactiveNet Server 8.0.00 on a Windows cluster environment:

    Note: There should be a shared drive (e.g. SAN disk) for the two Windows cluster nodes. The "shared drive" is not referring to "shared directory".

    Definitions:

    Shared directory - A directory present in one system and is shared between two or more users.

    Shared Drive - A drive that is mapped to both Windows cluster nodes (e.g. SAN disk)

    Step 1) Install ProactiveNet Server 8.0.00 on the primary Windows cluster node using the following command:

    setup.exe failover primary

    and continue with the rest of the installation instructions in the BMC ProactiveNet Installation Guide 8.0.

    Step 2) Install ProactiveNet Server 8.0.00 on the standby Windows cluster node using the following command:

    setup.exe failover standby

    and continue with the rest of the installation instructions in the BMC ProactiveNet Installation Guide 8.0.

    As the drive is shared, the common files will be installed on the shared drive so that they are accessible from both Windows cluster nodes.

    Thanks

    Manohar

  • RELEVANCY SCORE 2.48

    DB:2.48:Microsoft Cluster With Vmware Workstation 6.0.2 j7


    Hi,

    I'm trying to set up a Microsoft cluster with a shared quorum disk (Q). I created the disk as a second disk on the first node, but the second node won't boot because the virtual disk is locked by the first node; has anyone tried this out?

    It's all local, and we have no SAN LUNs to present to the 2 nodes as we can do in an ESX environment.

    Thanks

    DB:2.48:Microsoft Cluster With Vmware Workstation 6.0.2 j7

    I think I got it working:

    http://invurted.com/2008/05/vmware-and-windows-2003-clustering-post-version-4/

    I didn't have a requirement to install anything on it (SQL etc), but it looked to work in Workstation with Windows 2000 Clustering Services.

  • RELEVANCY SCORE 2.48

    DB:2.48:Clustering Vss 78


    We have a clustered SQL environment - two servers, Windows 2003 R2 64-bit, SQL 2005 Enterprise 64-bit. Storage is provided via a Xiotech SAN in an FC fabric. Are there any problems to watch for when using a VSS-aware backup application in this environment? What about VSS snapshots on the active cluster member? Thanks for any input or advice. Frank Hughes, Strategic Data Management, Fishers, IN

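    For reference, a minimal sketch for checking VSS health on the active cluster node before and after a VSS-aware backup; every writer should report a stable state with no last error:

        # Built-in VSS diagnostics (run in an elevated prompt on the active node).
        vssadmin list writers
        vssadmin list providers

        # The same check, crudely filtered from PowerShell for a quick eyeball.
        (vssadmin list writers) -match 'Writer name|State|Last error'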

  • RELEVANCY SCORE 2.48

    DB:2.48:Thread: Start Over-Cluster a8


    We've had a cluster up for approx 2.6 years: NetWare 6, SP, running cluster version 1.60.4.

    We set it up in our immaturity and it's never really worked correctly for failover of NDPS, GroupWise, logins etc.

    We've never really been able to do much about it because it is our main production server, the typical reasons. We now have a new SAN installed and have the opportunity to move from a shared SCSI setup to the SAN for our NetWare cluster. But we don't want to just add volumes to this troublesome cluster; we want to re-do the cluster correctly so failover works properly and we can add more servers for a true clustered environment.

    So, is there a method for removing cluster services without losing the volumes, operating in a non-clustered environment until we get the SAN set up, then setting up a new cluster, correctly, and transferring the data?

    I'm looking for a TID or white paper on the removal process if possible.

    Thanks.

    Stewart

    DB:2.48:Thread: Start Over-Cluster a8

    http://support.novell.com/cgi-bin/se...i?10015339.htm is the TID on removing clustering. You'll want to stop after modifying the autoexec.ncf, since you'll want the files for later.

    --

    Cheers!

    Richard Beels

    ~ Network Consultant

    ~ Sysop, Novell Support Connection

    ~ MCNE, CNE*, CNA*, CNS*, N*LS

  • RELEVANCY SCORE 2.48

    DB:2.48:San (Scsi Reservation) Is Possible Under Linux? f9



    I wish to emulate a SAN system with the Rocks Clusters Linux distribution (www.rocksclusters.org); is it possible to share a SCSI disk in a Linux cluster environment? The distribution is based on CentOS (Red Hat).

    I hope you reply...thanks

    DB:2.48:San (Scsi Reservation) Is Possible Under Linux? f9


    In theory it should work. I was going to test this myself.

    DB

  • RELEVANCY SCORE 2.48

    DB:2.48:Configure Enterprise Manager In An Cluster Environment f3


    Hi,

    I have an Oracle Database 10g Enterprise Edition database configured in a Red Hat Cluster environment in an active/passive automatic failover topology.

    I need to configure Enterprise Manager for this database.

    Can anyone help with the steps to perform this? Can it be done with 'emca' by somehow supplying the virtual IP address of the Oracle service in the cluster?

    Best Regards,

    Drini

    DB:2.48:Configure Enterprise Manager In An Cluster Environment f3

    Start reading
    Using RepManager Utility
    Oracle Enterprise Manager Grid Control Advanced Installation and Configuration Guide
    *11g Release 1 (11.1.0.1.0)*

    http://download.oracle.com/docs/cd/E11857_01/install.111/e16847/appdx_repmgr.htm#CFAGHEJF

    Regards
    Rob
    http://oemgc.wordpress.com

  • RELEVANCY SCORE 2.48

    DB:2.48:Thread: Information About Working Hsm Solutions c8


    We are in need of a HSM solution and I need input from people that have

    that installed and working.

    As I understand it, there are 3 players in this market for NetWare: CaminoSoft, Moonwalk and Knozall.

    After some reading I think CaminoSoft Managed Server HSM is the one that works best for us, but input from users of the others is also welcome.

    The environment we have is a 2-node cluster with SAN attached storage and

    we want to have the archive disks in the SAN and mounted on the cluster

    servers.

    Thanks in advance

    Bobo

    DB:2.48:Thread: Information About Working Hsm Solutions c8

    We are presently using CaminoSoft's HSM software to archive. For the most part it works great. The only major problem is with traditional volumes (a memory leak, and problems backing up stub files).

    Novell is working on those problems with traditional volumes. It works like a charm on NSS volumes though.

  • RELEVANCY SCORE 2.47

    DB:2.47:Cluster San Migration kj


    Since this is my first attempt at a cluster SAN migration, I want to know what steps should be taken to replace the old SAN with the new one.
    I have the following on the currently running cluster SAN:
    1) Quorum
    2) MSDTC
    3) SQL database log files
    4) SQL database data files

    Thanks in advance

    DB:2.47:Cluster San Migration kj

    This forum is devoted to System Center Virtual Machine Manager. Please redirect your question to one of the cluster forums below for the best assistance.

    http://social.technet.microsoft.com/Forums/en-US/winserverClustering/threads
    http://social.technet.microsoft.com/Forums/en-US/windowsserver2008r2highavailability/threads

    Thanks,

    Mike Briggs [MSFT]

    ** This information provided AS IS with no warranties, and confers no rights **
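
    For reference, a hedged PowerShell sketch of the inventory and quorum steps around such a SAN swap (FailoverClusters module, Windows Server 2008 R2 or later; 'New Quorum Disk' is a placeholder resource name):

        Import-Module FailoverClusters

        # Record which physical disk resources exist and which group owns them before touching anything.
        Get-ClusterResource | Where-Object { $_.ResourceType -like 'Physical Disk' } |
            Format-Table Name, OwnerGroup, State

        # After the new LUNs are presented to both nodes, add them and re-point the quorum.
        Get-ClusterAvailableDisk | Add-ClusterDisk
        Set-ClusterQuorum -NodeAndDiskMajority 'New Quorum Disk'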

  • RELEVANCY SCORE 2.47

    DB:2.47:San Copy Alternative For Hyper-V Cluster 8a



    Dears

    Would appreciate it if somebody could answer the queries below.

    In our environment we have the AX4-5i, and it is connected to a Hyper-V cluster (2 Windows 2008 nodes) running Exchange, SQL and SharePoint VMs. We are installing the VNX 5100 with the intention of migrating the old data from the AX4 to the VNX, but the problem is that I cannot perform the online method because:

    - No licenses were purchased for Unisphere Manager and SAN Copy for the AX4

    - There is no free space on the AX4 to create the RLP

    Again, on the pull method:

    - The duration taken was almost 3-4 days, which exceeds the planned downtime.

    Hence, can anybody advise whether we could opt for a host-based migration technique, and if so, what are the licensing options involved?

    Regards

    Muralee

    DB:2.47:San Copy Alternative For Hyper-V Cluster 8a

    Muraleedaran wrote:

    Can I do the following

    - Present the new LUN's to the cluster servers

    - Copy the data from old lun's to new luns' from windows explorer.

    Not with the VMs running, you can't. If your PowerPath is licensed, you are eligible for the free license for PPME (PowerPath Migration Enabler). One caveat to migrating MSCS clusters without downtime is that you have to run PowerPath 5.7 SP1.

  • RELEVANCY SCORE 2.47

    DB:2.47:Move Tfs 2005 To Different Sql Server 83


    DZimmy wrote in article Re: Is Migrating from a non-clustered data tier to a clustered data-tier supported? about moving non-cluster to cluster. I will be needing to do something similar.
     
    Currently I have TFS 2005 on a SQL cluster. That cluster needs radical repairs. We want to move TFS temporarily to a single SQL server, rebuild our cluster hardware, and then move TFS back to the original SQL cluster. The DB is on a SAN.
     
    What commands are required to perform this sort of action? We are in the process of building out the single SQL server now.
     
    Once the SQL cluster is rebuilt, I can use dzimmy's article to move from the non-clustered server back to our cluster.
     
    Thank you for your assistance.
     
    KAclin

    DB:2.47:Move Tfs 2005 To Different Sql Server 83

    Unfortunately, for TFS 2005 the steps for moving just the data tier aren't broken out into a separate document.  The most applicable document assumes you're moving both the AT and DT.
     
    http://msdn.microsoft.com/en-us/library/ms404869(VS.80).aspx
     
    The (relatively) good news is we have a TFS 2008 document we'll be releasing with SP1 that covers exactly this case and is much smaller and simpler.  I'm guessing you can't wait for that, though, so the above link is the best we have for now.

  • RELEVANCY SCORE 2.47

    DB:2.47:Obiee Data Cache On San Disks pk


    Hi,

    In our environment we have installed OBIEE 10.1.3.4 (BI Server) on two Windows servers (primary and secondary), and these servers are not clustered using Microsoft or any other cluster software.

    We are planning to use a SAN drive to hold the shared cache. We are now facing a challenge in configuring the SAN so that it is visible from both OBIEE servers: is it possible to use the SAN without installing any cluster software between these Windows servers?

    Can anyone confirm whether we need to install any software to see the same SAN drive from both Windows servers?

    Your help is greatly appreciated.

    Thanks

    DB:2.47:Obiee Data Cache On San Disks pk

    "Is it possible to use the SAN without installing any cluster between these Windows servers?" That's a question for a storage expert, nothing to do with OBIEE. In general you simply create a network share somewhere. On Windows, the BI services must run under a domain account in order to access network shares. Do not use the LocalSystem account.
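
    For reference, a sketch of the two Windows-side pieces the reply mentions, assuming the SAN volume is mounted as S: on the server that publishes the share and that a domain service account exists; the share name, the CORP\svc-obiee account and the 'Oracle BI Server' service name are placeholders:

        # Publish the cache folder as a share (net.exe also works on Windows 2003/2008;
        # New-SmbShare would need Server 2012 or later).
        net share BICache=S:\OBIEE\Cache "/GRANT:CORP\svc-obiee,FULL"

        # Run the BI service under the domain account instead of LocalSystem so it can
        # reach the UNC path (sc.exe requires the space after each '=').
        sc.exe config "Oracle BI Server" obj= "CORP\svc-obiee" password= "********"
        Restart-Service -DisplayName "Oracle BI Server"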

  • RELEVANCY SCORE 2.47

    DB:2.47:San Copy Migration Of A Red Hat Cluster That's Using Gfs p3



    I am doing several data migrations using SAN Copy. On the Red Hat Linux systems (these are mostly all 5.8), I am using the following procedure:

    1. Initial SAN Copy

    2. unmount SAN filesystems

    3. vgchange -an VG

    4. vgexport VG

    5. Comment out FS in /etc/fstab

    6. Shutdown host

    7. Incremental SAN Copy

    8. Boot host

    9. pvscan

    10. vgimport VG

    11. Uncomment /etc/fstab

    12. Mount filesystem

    This works fine on a single node. However, I am wondering if this would work in a clustered environment. I'm thinking I would shut down the applications and then shut down the cluster between steps 1 and 2 above, then start up the cluster and bring up the apps after step 10.

    Any thoughts on this?

    Thanks

    DB:2.47:San Copy Migration Of A Red Hat Cluster That's Using Gfs p3


    Well, answered my own question but I thought I'd post an answer here. Part of my procedure was taken from a Red Hat article here: http://www.redhat.com/archives/linux-cluster/2011-June/msg00012.html. All of this is very dependent on what version of RedHat cluster services you are using.

    One of the concerns was about the quorum drive, and whether the Linux cluster would recognize the SAN Copy version of this device. So I decided to create a new quorum drive on the new array, rather than migrate it. It is possible the SAN Copy version would be recognized, but I didn't want to leave that to chance.

    The only filesystems in this cluster were the GFS2 cluster filesystems, which are NOT mounted via /etc/fstab. Performing the vgexport/vgimport prior to shutting down the host wasn't necessary, or even possible really.

    The cluster deactivates the volume groups when you shut down all the cluster services. So you can't vgexport the volume groups manually. As it turns out, at least when using LVM2, LVM recognizes the SAN Copy volumes as the original volumes just fine.

    So the procedure that worked for me was:

    1. Initial SAN Copy (all volumes except for the Quorum)
    2. Create new device for the quorum on target array, present it to cluster hosts
    3. Shutdown cluster apps and all cluster services
    4. Mark new device as quorum device using mkqdisk (mkqdisk -L to verify)
    5. Edit cluster.conf to point to new quorum device
    6. Distribute cluster.conf to all nodes
    7. Bring up cluster, test. Clustat will show new quorum device.
    8. Shutdown cluster and shutdown all nodes
    9. Perform final SAN Copy
    10. Remove hosts from old storage groups
    11. Remove zones to old storage
    12. Add hosts to new storage group for cluster
    13. Boot all nodes
    14. Test