Sun Cluster 3.2 Cheat Sheet
This cheat sheet contains common commands and information for both Sun Cluster 3.1 and 3.2. There is some missing information (zones, NAS devices, etc.) which I hope to complete over time.

Both versions of Cluster ship a menu-driven text tool, so don't be afraid to use it, especially if the task is a simple one:

• scsetup (3.1)
• clsetup (3.2)

All of the commands in version 3.1 are also available in version 3.2.

Source: http://www.datadisk.co.uk/html_docs/sun/sun_cluster_cs.htm (9/15/2010)

Daemons and Processes

At the bottom of the installation guide I listed the daemons and processes running after a fresh install; now is the time to explain what these processes do. I have managed to obtain information on most of them but am still looking for details on the others.

Versions 3.1 and 3.2

• clexecd - used by cluster kernel threads to execute userland commands (such as the run_reserve and dofsck commands). It is also used to run cluster commands remotely (like the cluster shutdown command). It registers with failfastd so that a failfast device driver will panic the kernel if the daemon is killed and not restarted in 30 seconds.
• cl_ccrad - provides access from userland management applications to the CCR. It is automatically restarted if it is stopped.
• cl_eventd - the cluster event daemon registers and forwards cluster events (such as nodes entering and leaving the cluster). There is also a protocol whereby user applications can register themselves to receive cluster events. The daemon is automatically respawned if it is killed.
• cl_eventlogd - the cluster event log daemon logs cluster events into a binary log file. At the time of writing there is no published interface to this log. It is automatically restarted if it is stopped.
• failfastd - the failfast proxy server; the failfast daemon allows the kernel to panic if certain essential daemons have failed.
• rgmd - the resource group management daemon, which manages the state of all cluster-unaware applications. A failfast driver panics the kernel if this daemon is killed and not restarted in 30 seconds.
• rpc.fed - the fork-and-exec daemon, which handles requests from rgmd to spawn methods for specific data services. A failfast driver panics the kernel if this daemon is killed and not restarted in 30 seconds.
• rpc.pmfd - the process monitoring facility. It is used as a general mechanism to initiate restarts and failure action scripts for some cluster framework daemons (in Solaris 9 OS), and for most application daemons and application fault monitors (in Solaris 9 and 10 OS). A failfast driver panics the kernel if this daemon is stopped and not restarted in 30 seconds.
• scdpmd - the multi-threaded disk path monitoring (DPM) daemon runs on each node. It monitors the availability of the logical paths visible through the various multipath drivers (MPxIO, HDLM, PowerPath, etc.), so that disk paths can be reported in the output of the cldev status command. Automatically restarted by rpc.pmfd if it dies.
• pnmd - the public network management daemon manages network status information received from the local IPMP daemon running on each node and facilitates application failovers caused by complete public network failures on nodes.

Version 3.2 only

• cl_execd - failfast will hose the box if this daemon is killed and not restarted in 30 seconds.
• cl_pnmd - a daemon for the public network management (PNM) module. It is started at boot time and starts the PNM service. It keeps track of the local host's IPMP state and facilitates inter-node failover for all IPMP groups.
• scprivipd - provisions IP addresses on the clprivnet0 interface, on behalf of zones.
• sc_zonesd - monitors the state of Solaris 10 non-global zones so that applications designed to fail over between zones can react appropriately to zone booting failure.
• cznetd - used for reconfiguring and plumbing the private IP address in a local zone after a virtual cluster is created; also see the cznetd.xml file.
• scqdmd - the quorum server daemon; this possibly used to be called "scqsd".
• qd_userd - serves as a proxy whenever any quorum device activity requires execution of some command in userland, e.g. a NAS quorum device.
• ifconfig_proxy_serverd, rtreg_proxy_serverd - still looking for information on these.
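A quick sanity check after boot is to compare the process list against the daemon list above. The helper below is not part of Sun Cluster; it is a hypothetical sketch that diffs a saved `ps -eo comm` listing against the expected 3.1/3.2 daemon names.

```shell
# Hypothetical helper (not shipped with Sun Cluster): report any of the
# expected cluster daemons missing from a saved process listing.
expected_daemons="clexecd cl_ccrad cl_eventd cl_eventlogd failfastd rgmd rpc.fed rpc.pmfd scdpmd"

check_daemons() {
    # $1 = file containing one process name per line (e.g. ps -eo comm > file)
    missing=""
    for d in $expected_daemons; do
        grep -qxF "$d" "$1" || missing="$missing $d"
    done
    if [ -z "$missing" ]; then
        echo "all expected cluster daemons present"
    else
        echo "MISSING:$missing"
    fi
}
```

Usage on a cluster node: `ps -eo comm > /tmp/procs && check_daemons /tmp/procs`.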
File locations

Both versions (3.1 and 3.2):
• man pages: /usr/cluster/man
• log files: /var/cluster/logs and /var/adm/messages
• configuration files (CCR, eventlog, etc.): /etc/cluster/
• cluster and other commands: /usr/cluster/lib/sc

Version 3.1 only:
• sccheck logs: /var/cluster/sccheck/report.<date>
• cluster infrastructure file: /etc/cluster/ccr/infrastructure

Version 3.2 only:
• sccheck logs: /var/cluster/logs/cluster_check/remote.<date>
• cluster infrastructure file: /etc/cluster/ccr/global/infrastructure
• command log: /var/cluster/logs/commandlog

SCSI Reservations

Display reservation keys:
  scsi2: /usr/cluster/lib/sc/pgre -c pgre_inkeys -d /dev/did/rdsk/d4s2
  scsi3: /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/d4s2

Determine the device owner:
  scsi2: /usr/cluster/lib/sc/pgre -c pgre_inresv -d /dev/did/rdsk/d4s2
  scsi3: /usr/cluster/lib/sc/scsi -c inresv -d /dev/did/rdsk/d4s2
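Because the binary, subcommand, and semantics differ between SCSI-2 (pgre) and SCSI-3 reservations, a tiny dispatcher saves remembering the four combinations. This wrapper is illustrative, not part of the product; it only prints the command to run.

```shell
# Illustrative helper: print the right reservation-inspection command for a
# DID device, given the SCSI flavour and whether you want keys or the owner.
res_cmd() {
    # $1 = scsi2|scsi3   $2 = keys|owner   $3 = DID device, e.g. /dev/did/rdsk/d4s2
    case "$1-$2" in
        scsi2-keys)  echo "/usr/cluster/lib/sc/pgre -c pgre_inkeys -d $3" ;;
        scsi3-keys)  echo "/usr/cluster/lib/sc/scsi -c inkeys -d $3" ;;
        scsi2-owner) echo "/usr/cluster/lib/sc/pgre -c pgre_inresv -d $3" ;;
        scsi3-owner) echo "/usr/cluster/lib/sc/scsi -c inresv -d $3" ;;
        *) echo "usage: res_cmd scsi2|scsi3 keys|owner <did-device>" >&2; return 1 ;;
    esac
}
```

Run the printed command on the node as root; on a non-cluster machine the paths will simply not exist.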
Command shortcuts

In version 3.2 there are a number of shortcut command names, detailed below. I have left the full command names in the rest of the document so it is obvious what we are performing. All the commands are located in /usr/cluster/bin.

  cldevice              cldev
  cldevicegroup         cldg
  clinterconnect        clintr
  clnasdevice           clnas
  clquorum              clq
  clresource            clrs
  clresourcegroup       clrg
  clreslogicalhostname  clrslh
  clresourcetype        clrt
  clressharedaddress    clrssa

Shutting down and Booting a Cluster

  Shutdown the entire cluster:
    3.1:
      ## other nodes in the cluster
      scswitch -S -h <host>
      shutdown -i5 -g0 -y
      ## last remaining node
      scshutdown -g0 -y
    3.2: cluster shutdown -g0 -y
  Shutdown a single node:
    3.1: scswitch -S -h <host>; shutdown -i5 -g0 -y
    3.2: clnode evacuate <node>; shutdown -i5 -g0 -y
  Reboot a node into non-cluster mode:
    ok> boot -x

Cluster information

  Cluster:
    3.1: scstat -pv
    3.2: cluster list -v, cluster show, cluster status
  Nodes:
    3.1: scstat -n
    3.2: clnode list -v, clnode show, clnode status
  Devices:
    3.1: scstat -D
    3.2: cldevice list, cldevice show, cldevice status
  Quorum:
    3.1: scstat -q
    3.2: clquorum list -v, clquorum show, clquorum status
  Transport info:
    3.1: scstat -W
    3.2: clinterconnect show, clinterconnect status
  Resources:
    3.1: scstat -g
    3.2: clresource list -v, clresource show, clresource status
  Resource groups:
    3.1: scstat -g, scrgadm -pv
    3.2: clresourcegroup list -v, clresourcegroup show, clresourcegroup status
  Resource types:
    3.2: clresourcetype list -v, clresourcetype list-props -v, clresourcetype show
  IP networking multipathing:
    3.1: scstat -i
    3.2: clnode status -m
  Installation info (prints packages and version):
    3.1: scinstall -pv
    3.2: clnode show-rev -v
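The 3.1 whole-cluster shutdown order matters: every other node is evacuated and halted first, and scshutdown runs on the last remaining node. The sketch below only echoes that sequence (a dry run with example node names); on a real cluster you would run each printed command on the node it names.

```shell
# Dry-run sketch of the 3.1 cluster shutdown order shown above.
# Nothing is executed; each step is printed with the node it belongs on.
shutdown_sequence() {
    # $1 = last remaining node, remaining args = the other cluster nodes
    last="$1"; shift
    for n in "$@"; do
        echo "on $n: scswitch -S -h $n"
        echo "on $n: shutdown -i5 -g0 -y"
    done
    echo "on $last: scshutdown -g0 -y"
}
```

For example, `shutdown_sequence node1 node2 node3` prints the steps for halting node2 and node3 before taking the cluster down from node1.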
Cluster Configuration

  Release:
    cat /etc/cluster/release
  Integrity check:
    3.1: sccheck
    3.2: cluster check
  Configure the cluster (add nodes, add data services, etc.):
    scinstall
  Cluster configuration utility (quorum, data services, resource groups, etc.):
    3.1: scsetup
    3.2: clsetup
  Rename:
    3.2: cluster rename -c <cluster_name>
  Set a property:
    3.2: cluster set -p <name>=<value>
  List:
    3.2:
      ## List cluster commands
      cluster list-cmds
      ## Display the name of the cluster
      cluster list
      ## List the checks
      cluster list-checks
      ## Detailed configuration
      cluster show -t global
  Status:
    3.2: cluster status
  Reset the cluster private network settings:
    3.2: cluster restore-netprops <cluster_name>
  Place the cluster into install mode:
    3.2: cluster set -p installmode=enabled
  Add a node:
    3.1: scconf -a -T node=<host>
    3.2: clnode add -c <clustername> -n <nodename> -e <endpoint1>,<endpoint2>
  Remove a node:
    3.1: scconf -r -T node=<host>
    3.2: clnode remove
  Prevent new nodes from entering:
    3.1: scconf -a -T node=.
  Put a node into maintenance state:
    scconf -c -q node=<node>,maintstate
    Note: use the scstat -q command to verify that the node is in maintenance mode; the vote count should be zero for that node.
  Get a node out of maintenance state:
    scconf -c -q node=<node>,reset
    Note: use the scstat -q command to verify; the vote count should be one for that node.
Node Configuration

  Add a node to the cluster:
    3.2: clnode add [-c <cluster>] [-n <sponsornode>] -e <endpoint> -e <endpoint> <node>
  Remove a node from the cluster:
    ## Make sure you are on the node you wish to remove
    3.2: clnode remove
  Evacuate a node from the cluster:
    3.1: scswitch -S -h <node>
    3.2: clnode evacuate <node>
  Clean up the cluster configuration (used after removing nodes):
    3.2: clnode clear <node>
  List nodes:
    ## Standard list
    clnode list [+|<node>]
    ## Detailed list
    clnode show [+|<node>]
  Change a node's property:
    clnode set -p <name>=<value> [+|<node>]
  Status of nodes:
    clnode status [+|<node>]

Admin Quorum Device

Quorum devices are nodes and disk devices, so the total quorum will be all nodes and devices added together. You can use the scsetup (3.1)/clsetup (3.2) interface to add/remove quorum devices, or use the commands below.

  Adding a SCSI device to the quorum:
    3.1: scconf -a -q globaldev=d11
    Note: if you get the error message "unable to scrub device", use scgdevs to add the device to the global device namespace.
    3.2: clquorum add [-t <type>] [-p <name>=<value>] [+|<devicename>]
  Adding a NAS device to the quorum:
    3.2: clquorum add -t netapp_nas -p filer=<nasdevice>
  Adding a quorum server:
    3.2: clquorum add -t quorumserver -p qshost=<IPaddress>
  Removing a device from the quorum:
    3.1: scconf -r -q globaldev=d11
    3.2: clquorum remove [-t <type>] [+|<devicename>]
  Remove the last quorum device:
    3.1:
      ## Put cluster into install mode
      scconf -c -q installmode
      ## Remove the quorum device
      scconf -r -q globaldev=d11
      ## Check the quorum devices
      scstat -q
    3.2:
      ## Place the cluster in install mode
      cluster set -p installmode=enabled
      ## Remove the quorum device
      clquorum remove <device>
      ## Verify the device has been removed
      clquorum list -v
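As a rough illustration of the arithmetic implied above: node votes and quorum device votes are summed, and a partition needs a strict majority of the total to stay up. The helper below is plain shell arithmetic, not a cluster command.

```shell
# Quorum arithmetic sketch: total votes = node votes + quorum device votes;
# a surviving partition must hold a strict majority (total/2 + 1).
quorum_votes() {
    # $1 = number of node votes   $2 = votes contributed by quorum devices
    total=$(( $1 + $2 ))
    needed=$(( total / 2 + 1 ))
    echo "total=$total needed=$needed"
}
```

For a two-node cluster with one quorum disk, `quorum_votes 2 1` shows why one node plus the quorum disk can survive the loss of the other node.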
  List quorum devices:
    ## Standard list
    clquorum list -v [-t <type>] [-n <node>] [+|<devicename>]
    ## Detailed list
    clquorum show [-t <type>] [-n <node>]
    ## Status
    clquorum status [-t <type>] [-n <node>] [+|<devicename>]
  Resetting quorum info:
    3.1: scconf -c -q reset
    3.2: clquorum reset
    Note: this will bring all offline quorum devices online.
    ## Obtain the device number
    scdidadm -L
  Bring a quorum device into maintenance mode (known as "disabled" in 3.2):
    3.1: scconf -c -q globaldev=<device>,maintstate
    3.2: clquorum disable [-t <type>] [+|<devicename>]
  Bring a quorum device out of maintenance mode (known as "enabled" in 3.2):
    3.1: scconf -c -q globaldev=<device>,reset
    3.2: clquorum enable [-t <type>] [+|<devicename>]

Device Configuration

  Check device:
    cldevice check [-n <node>] [+]
  Remove all devices from a node:
    cldevice clear [-n <node>]
  Monitoring:
    ## Turn on monitoring
    cldevice monitor [-n <node>] [+|<device>]
    ## Turn off monitoring
    cldevice unmonitor [-n <node>] [+|<device>]
  Rename:
    cldevice rename -d <destination_device_name>
  Replicate:
    cldevice replicate [-S <source-node>]
  Set properties of a device:
    cldevice set -p default_fencing={global|pathcount|...}
  Status:
    ## Standard display
    cldevice status [-s <state>] [-n <node>]
    ## Display failed disk paths
    cldevice status -s fail
  List all the configured devices, including paths (local node only):
    3.1: scdidadm -l
  List all the configured devices, including paths (all nodes):
    3.1: scdidadm -L
    3.2:
      ## Standard list
      cldevice list [-n <node>] [+|<device>]
      ## Detailed list
      cldevice show [-n <node>] [+|<device>]
  Reconfigure the device database, creating new instance numbers if required:
    3.1: scdidadm -r
    3.2: cldevice populate, cldevice refresh [-n <node>] [+]
  Perform the repair procedure for a particular device id path (use when a disk gets replaced):
    3.1: scdidadm -R <c0t0d0s0>   ## by device
         scdidadm -R 2            ## by device id
    3.2: cldevice repair [-n <node>] [+|<device>]

Disk groups

  Create a device group:
    3.1: scconf -a -D type=vxvm,name=appdg,nodelist=<host>:<host>,preferenced=true
    3.2: cldevicegroup create
  Remove a device group:
    3.1: scconf -r -D name=<disk group>
    3.2: cldevicegroup delete <devgrp>
  Set a property:
    3.2: cldevicegroup set [-p <name>=<value>]
  List:
    3.1: scstat -D
    3.2:
      ## Standard list
      cldevicegroup list [+|<devgrp>]
      ## Detailed configuration
      cldevicegroup show [+|<devgrp>]
  Status:
    3.1: scstat -D
    3.2: cldevicegroup status [+|<devgrp>]
  Adding a single node:
    3.1: scconf -a -D type=vxvm,name=appdg,nodelist=<host>
    3.2: cldevicegroup add-node
  Removing a single node:
    3.1: scconf -r -D name=<disk group>,nodelist=<host>
    3.2: cldevicegroup remove-node
  Switch:
    3.1: scswitch -z -D <disk group> -h <host>
    3.2: cldevicegroup switch
  Put into maintenance mode:
    3.1: scswitch -m -D <disk group>
  Take out of maintenance mode:
    3.1: scswitch -z -D <disk group> -h <host>
  Onlining a disk group:
    3.1: scswitch -z -D <disk group> -h <host>
    3.2: cldevicegroup online <devgrp>
  Offlining a disk group:
    3.1: scswitch -F -D <disk group>
    3.2: cldevicegroup offline <devgrp>
  Resync a disk group:
    3.1: scconf -c -D name=appdg,sync
    3.2: cldevicegroup sync [<devgrp>]
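After physically swapping a failed disk, the repair procedure above boils down to a short sequence. The function below only echoes the steps (the device name is an example); run the printed commands for real on the affected node.

```shell
# Dry-run of the disk-replacement steps: confirm the DID instance for the
# path, then re-run the repair procedure (3.1 and 3.2 forms are both shown).
replace_disk_steps() {
    dev="$1"   # e.g. c1t3d0s2
    echo "scdidadm -l $dev"
    echo "scdidadm -R $dev"
    echo "cldevice repair $dev"
}
```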
Transport Cable

  Add:
    3.2: clinterconnect add <endpoint>,<endpoint>
  Remove:
    3.2: clinterconnect remove <endpoint>,<endpoint>
  Enable:
    3.1: scconf -c -m endpoint=<host>:qfe1,state=enabled
    3.2: clinterconnect enable [-n <node>] [+|<endpoint>,<endpoint>]
  Disable (note: it gets deleted):
    3.1: scconf -c -m endpoint=<host>:qfe1,state=disabled
    3.2: clinterconnect disable [-n <node>] [+|<endpoint>,<endpoint>]
  List:
    3.1: scstat -W
    ## Standard and detailed list
    3.2: clinterconnect show [-n <node>] [+|<endpoint>,<endpoint>]
  Status:
    3.1: scstat -W
    3.2: clinterconnect status [-n <node>] [+|<endpoint>,<endpoint>]

Resource Groups

  Adding (failover):
    3.1: scrgadm -a -g <res_group> -h <host>,<host>
    3.2: clresourcegroup create <res_group>
  Adding (scalable):
    3.2: clresourcegroup create -S <res_group>
  Adding a node to a resource group:
    3.2: clresourcegroup add-node -n <node> <res_group>
  Removing:
    3.1: scrgadm -r -g <group>
    3.2:
      ## Remove a resource group
      clresourcegroup delete <res_group>
      ## Remove a resource group and all its resources
      clresourcegroup delete -F <res_group>
  Removing a node from a resource group:
    3.2: clresourcegroup remove-node -n <node> <res_group>
  Changing properties:
    3.1: scrgadm -c -g <res_group> -y <property>=<value>
    3.2: clresourcegroup set -p <name>=<value> [+|<res_group>]   (e.g. -p Failback=true)
  Status:
    3.1: scstat -g
    3.2: clresourcegroup status [-n <node>] [+|<res_group>]
  Listing:
    3.1: scstat -g
    3.2: clresourcegroup list [-n <node>] [+|<res_group>]
  Detailed list:
    3.1: scrgadm -pv -g <res_group>
    3.2: clresourcegroup show [-n <node>] [-r <resource>] [+|<res_group>]
  Display mode type (failover or scalable):
    3.1: scrgadm -pv -g <res_group> | grep 'Res Group mode'
  Offlining:
    3.1: scswitch -F -g <res_group>
    3.2:
      ## All resource groups
      clresourcegroup offline +
      ## Individual group
      clresourcegroup offline [-n <node>] <res_group>
datadisk.1 scrgadm –a –L –g <res_group> -l <logicalhost> scrgadm –a –S –g <res_group> -l <logicalhost> scrgadm –a –j apache_res -g <res_group> \ -t SUNW.2 .uk/html_docs/sun/sun_cluster_cs.htm 9/15/2010 .co.Sun Cluster 3.HAStoragePlus \ > -x FileSystemMountPoints=/oracle/data01 \ > -x Affinityon=true scrgadm –r –j res-ip clresource create -t HAStorage -p FilesystemMountPoints=<mount -p Affinityon=true <rs-hasp> clreslogicalhostname create clressharedaddress create adding a shared apache application and attaching the network resource Create a HAStoragePlus failover resource Removing Note: must disable the resource first clresource delete [-g <res_group>][ http://www.apache -y Network_resources_used = <logicalhost> -y Scalable=True –y Port_list = 80/tcp \ -x Bin_dir = /usr/apache/bin scrgadm -a -g rg_oracle -j hasp_data01 -t SUNW.apache -y Network_resources_used = <logicalhost> -y Scalable=False –y Port_list = 80/tcp \ -x Bin_dir = /usr/apache/bin scrgadm –a –j apache_res -g <res_group> \ -t SUNW.Cheat Sheet Page 10 of 12 ## All resource groups clresourcegroup online + Onlining scswitch -Z -g <res_group> ## Individual groups clresourcegroup online [-n <node>] <res_group> Evacuate all resource groups from a node (used when shutting down a node) scswitch –u –g <res_group> clresourcegroup evacuate [+|-n <node>] Unmanaging Note: (all resources in group must be disabled) clresourcegroup unmanage <res_group> Managing Switching Suspend Resume Remaster (move the resource group/s to their preferred node) Restart a resource group (bring offline then online) scswitch –o –g <res_group> scswitch –z –g <res_group> –h <host> n/a n/a n/a n/a clresourcegroup manage <res_group> clresourcegroup switch -n <node> <res_group> clresourcegroup suspend [+|<res_group>] clresourcegroup resume [+|<res_group>] clresourcegroup remaster [+|<res_group>] clresourcegroup restart [-n <node>] Resources € Adding failover network resource Adding shared network resource adding a failover apache application 
and attaching the network resource 3. 2) Deregistering a resource type from a node Listing Listing resource type properties Show resource types scrgadm –a –t <resource type> n/a scrgadm –r –t <resource type> n/a scrgadm –pv | grep ‘Res Type name’ 3.co.HAStoragePlus clresourcetype register <type> clresourcetype add-node clresourcetype unregister clresourcetype remove-node clresourcetype list [<type>] clresourcetype list-props clresourcetype show [<type>] http://www.htm 9/15/2010 .<host> -j <resource> -f STOP_FAILED scrgadm –pvv –j <resource> | grep –I network ## offline the group scswitch –F –g rgroup-1 ## List properties clresource list-props [-g <res_group clresurce show [-n <node>] [ clresource status [-s <state>][ clresource monitor [-n <node>] [ clresource unmonitor [-n <node>] [ clresource disable <resource> clresource enable <resource> Detailed List Status Disable resoure monitor Enable resource monitor Disabling Enabling Clearing a failed resource Find the network of a resource clresource clear -f STOP_FAILED <res ## offline the group clresourcegroup offline <res_group> ## remove the resource clresource [-g <res_group>][ ## remove the resource group clresourcegroup delete <res_group> Removing a resource and resource group ## remove the resource scrgadm –r –j res-ip ## remove the resource group scrgadm –r –g rgroup-1 Resource Types € Adding (register in 3.Sun Cluster 3.2 .uk/html_docs/sun/sun_cluster_cs.datadisk.e SUNW.Cheat Sheet Page 11 of 12 ## Changing clresource set -t <type> changing or adding properties scrgadm -c -j <resource> -y <property=value> ## Adding clresource set -p <name>+=<value> clresource list [-g <res_group>][ List scstat -g scrgadm –pv –j res-ip scrgadm –pvv –j res-ip scstat -g scrgadm –n –M –j res-ip scrgadm –e –M –j res-ip scswitch –n –j res-ip scswitch –e –j res-ip scswitch –c –h<host>.1 i.2) Register a resource type to a node Deleting (remove in 3. 
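Putting the resource commands together, a typical 3.2 failover service is built in a fixed order: group first, then the logical host and storage resources, then the application resource, and finally bring the group online managed. The sketch below echoes that order with example names (oracle-rg, oracle-lh, and hasp-rs are assumptions, and the exact clreslogicalhostname/clresource flags should be checked against the man pages before running them for real).

```shell
# Dry-run sketch of the usual 3.2 build order for a failover service.
# Names are examples; commands are printed, not executed.
build_failover_service() {
    rg="$1"; lh="$2"; mnt="$3"
    echo "clresourcegroup create $rg"
    echo "clreslogicalhostname create -g $rg $lh"
    echo "clresource create -g $rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=$mnt -p Affinityon=true hasp-rs"
    echo "clresourcegroup online -M $rg"
}
```

The application resource (e.g. a SUNW.apache or SUNW.oracle_server resource) would be created between the storage step and the online step.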
  Changing or adding properties:
    3.1: scrgadm -c -j <resource> -y <property>=<value>
    3.2:
      ## Changing
      clresource set [-t <type>] -p <name>=<value>
      ## Adding to a list property
      clresource set -p <name>+=<value>
  List:
    3.1: scstat -g, scrgadm -pv -j res-ip
    3.2: clresource list [-g <res_group>]
  Detailed list:
    3.1: scrgadm -pvv -j res-ip
    3.2:
      clresource show [-n <node>]
      ## List properties
      clresource list-props [-g <res_group>]
  Status:
    3.1: scstat -g
    3.2: clresource status [-s <state>]
  Disable resource monitor:
    3.1: scrgadm -n -M -j res-ip
    3.2: clresource unmonitor [-n <node>] <resource>
  Enable resource monitor:
    3.1: scrgadm -e -M -j res-ip
    3.2: clresource monitor [-n <node>] <resource>
  Disabling:
    3.1: scswitch -n -j res-ip
    3.2: clresource disable <resource>
  Enabling:
    3.1: scswitch -e -j res-ip
    3.2: clresource enable <resource>
  Clearing a failed resource:
    3.1: scswitch -c -h <host>,<host> -j <resource> -f STOP_FAILED
    3.2: clresource clear -f STOP_FAILED <resource>
  Find the network of a resource:
    3.1: scrgadm -pvv -j <resource> | grep -i network
  Removing a resource and resource group:
    3.1:
      ## Offline the group
      scswitch -F -g rgroup-1
      ## Remove the resource
      scrgadm -r -j res-ip
      ## Remove the resource group
      scrgadm -r -g rgroup-1
    3.2:
      ## Offline the group
      clresourcegroup offline <res_group>
      ## Remove the resource
      clresource delete <resource>
      ## Remove the resource group
      clresourcegroup delete <res_group>

Resource Types

  Adding (register in 3.2):
    3.1: scrgadm -a -t <resource type>   (e.g. SUNW.HAStoragePlus)
    3.2: clresourcetype register <type>
  Register a resource type to a node:
    3.2: clresourcetype add-node
  Deleting (remove in 3.2):
    3.1: scrgadm -r -t <resource type>
    3.2: clresourcetype unregister
  Deregister a resource type from a node:
    3.2: clresourcetype remove-node
  Listing:
    3.1: scrgadm -pv | grep 'Res Type name'
    3.2: clresourcetype list [<type>]
  Listing resource type properties:
    3.2: clresourcetype list-props
  Show resource types:
    3.2: clresourcetype show [<type>]
  Set properties of a resource type:
    3.2: clresourcetype set [-p <name>=<value>]