HP P2000 G3 MSA CLI Guide



Hello guys, here are the CLI commands you can use to configure HP P2000 storage.
The CLI can be accessed over SSH, Telnet, the serial console, or HTTP/HTTPS.
The default user accounts are as follows:

Username    Password    Roles
--------    --------    ---------------------------------
monitor     !monitor    Monitor (view only)
manage      !manage     Monitor, Manage (view and change)
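
For example, a CLI session can be opened over SSH with the manage account (the IP address below is only a placeholder for your array's management IP):

$ ssh manage@10.0.0.1
Password:
# show disks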

# clear disk-metadata
Typically used when a disk is stuck in LEFTOVR status, for example after an amber fault LED event.

# show disks
          Location ... How Used ...
          ----------------------...
          1.1      ... LEFTOVR  ...
          1.2      ... VDISK    ...

Clear metadata from a leftover disk:
# clear disk-metadata 1.1
  Info: Updating disk list...
  Info: Disk disk_1.1 metadata was cleared. (2012-01-18 10:35:39)
  Success: Command completed successfully. - Metadata was cleared. (2012-01-18 10:35:39)
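
After clearing the metadata, run show disks again; the slot should no longer show LEFTOVR (on this firmware it is typically reported as AVAIL, though the exact label can vary by release):

# show disks
          Location ... How Used ...
          ----------------------...
          1.1      ... AVAIL    ...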

# clear events
Typically used to clear the event log on controller A and/or B.
clear events [a|b|both]
a|b|both specifies which controller's event log to clear.
Clear the event log for controller A:
# clear events a  
Success: Command completed successfully. - The event log was successfully
cleared. (2012-01-18 10:40:13)

# restart
Used to restart a Storage Controller or a Management Controller.
sc|mc
   The controller to restart:    
   • sc: Storage Controller
   • mc: Management Controller

Restart the Management Controller in controller A, which you are logged in to:
# restart mc a
  During the restart process you will briefly lose communication with 
  the specified Management Controller(s).      
  Continue? yes
  Info: Restarting the local MC (A)...
  Success: Command completed successfully. (2012-01-21 11:38:47)
From controller A, restart the Storage Controller in controller B:
# restart sc b
 Success: Command completed successfully. - SC B was restarted. (2012-01-21
          11:42:10)


Stay tuned for the next update...

Collect Snapshot XSCF M-Series



Hello guys, here is some information about XSCF. Oracle Sun M-series servers ship with the eXtended System Control Facility (XSCF) firmware, which provides a powerful console. XSCF has its own processor to monitor the server hardware, so even while the server is down, XSCF stays up as long as power is connected to the server.

When you run an XSCF snapshot, XSCF collects information about the entire server rather than any particular domain you have configured, and running the snapshot has no impact on the machine. The XSCF snapshot command gathers the configuration setup, environment data, logs, error information, and FRU-ID data needed for diagnosis.

To take an XSCF snapshot, log in to the server's XSCF console.
Note: you must have the platadm or fieldeng privilege to run snapshot.
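
If you are unsure which privileges your account has, they can be checked from the XSCF shell first (a quick sketch; option names and output layout may differ slightly between XSCF firmware releases):

XSCF> showuser -p        (lists the privileges assigned to the account)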


This procedure can be run on the servers below:

Sun SPARC Enterprise M3000 Server
Sun SPARC Enterprise M4000 Server

Sun SPARC Enterprise M5000 Server
Sun SPARC Enterprise M9000-32 Server
Sun SPARC Enterprise M9000-64 Server


From the server's XSCF prompt, run the command below, specifying the destination host and directory where you want the snapshot saved.


XSCF> snapshot -L F -t admin@172.16.17.29:/home/maintenance
Downloading Public Key from '172.16.17.29'...
Public Key Fingerprint: 68:5a:d9:02:1b:62:c9:a8:95:1a:52:31:9c:c4:82:b0
Accept this public key (yes/no)? yes
Enter ssh password for user 'admin' on host '172.16.17.29':
Setting up ssh connection to admin@172.16.17.29...
Collecting data into admin@172.16.17.29:/home/maintenance/myglobal-xscf_2015-01-07T20-30-15.zip
Data collection complete
XSCF> exit

The example above assumes:
Destination host where the snapshot file is saved: 172.16.17.29
SSH user: admin
Destination directory: /home/maintenance
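
Once the transfer finishes, it is worth confirming the archive landed on the destination host (file name taken from the example output above):

admin@172.16.17.29 $ ls -lh /home/maintenance/myglobal-xscf_2015-01-07T20-30-15.zip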

Remove Hard Disk in Solaris 10


Hello guys, if you are removing a hard drive while the operating system is still running,
you must remove the drive logically from the operating system before physically
removing it from the server.

If you are removing a hard drive from a server that is powered off, skip to
Step 6 in these procedures.

Use the following instructions in conjunction with the cfgadm(1M) man page.

1. Check that the hard drive you want to remove is visible to the operating system.

# format

Searching for disks...done



AVAILABLE DISK SELECTIONS:

       0. c1t0d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>

          /pci@1c,600000/scsi@2/sd@0,0

       1. c1t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>

          /pci@1c,600000/scsi@2/sd@1,0

Specify disk (enter its number):

2. Determine the correct Ap_Id label for the hard drive that you want to remove.

# cfgadm -al

Ap_Id           Type       Receptacle Occupant      Condition

c0              scsi-bus   connected  configured    unknown

c0::dsk/c0t0d0  CD-ROM     connected  configured    unknown

c1              scsi-bus   connected  configured    unknown

c1::dsk/c1t0d0  disk       connected  configured    unknown

c1::dsk/c1t1d0  disk       connected  configured    unknown

c2              scsi-bus   connected  unconfigured  unknown

usb0/1          unknown    empty      unconfigured  ok

usb0/2          unknown    empty      unconfigured  ok


Caution - Before proceeding, you must remove the hard drive from all of its software
mount positions and delete any swap areas in use on the disk (see the checks sketched
below). If the drive is the system boot device, do not proceed further with these
instructions. Do not attempt to unconfigure the boot disk.
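
A quick way to run those checks, using the example disk c1t1d0 from this procedure (adjust the device name for your system):

# df -h | grep c1t1d0     (no output means no slice of the disk is currently mounted)
# swap -l                 (make sure no /dev/dsk/c1t1d0sX entry is listed; remove it with swap -d first)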

3. Unconfigure the hard drive that you intend to remove.

Use the unconfigure command and specify the device you intend to remove. For example, if it is Disk 1, type:

# cfgadm -c unconfigure c1::dsk/c1t1d0


4. Check that the device is now unconfigured:

# cfgadm -al

Ap_Id           Type        Receptacle Occupant      Condition

c0              scsi-bus    connected  configured    unknown

c0::dsk/c0t0d0  CD-ROM      connected  configured    unknown

c1              scsi-bus    connected  configured    unknown

c1::dsk/c1t0d0  disk        connected  configured    unknown

c1::dsk/c1t1d0  unavailable connected  unconfigured  unknown

c2              scsi-bus    connected  unconfigured  unknown

usb0/1          unknown     empty      unconfigured  ok

usb0/2          unknown     empty      unconfigured  ok


5. Confirm that the hard drive you want to remove from the server is no longer visible to the operating system:

# format

Searching for disks...done



AVAILABLE DISK SELECTIONS:

       0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>

          /pci@1c,600000/scsi@2/sd@0,0

Specify disk (enter its number): 


6. Ensure that the server is properly grounded.

7. Grip the bezel at the two finger holds and rotate it down to open it.

8. Check that the blue indicator LED is lit on the hard drive.

The blue LED comes on when the hard drive is ready to remove.

9. Slide the catch at the front of the hard drive to the right.


10. Pull the handle and remove the hard drive from the server by sliding it out from its bay.

Update OBP SunFire V240



After updating ALOM, here are the step-by-step instructions for updating OBP from the OS. The OBP patch file can be downloaded here.

1. Stop all running applications, because the system will be rebooted automatically once the update completes.

2. Log in as the superuser/root.

3. Create a directory under "/", for example:
    # mkdir /OBP
    # cd /OBP

4. Place the 142700-02.zip file in the /OBP directory.

5. Unzip the 142700-02.zip file:
    # unzip 142700-02.zip

6. Make the unix.flash-update.SunFire240.sh script executable:
    # chmod a+x unix.flash-update.SunFire240.sh

7. Run the unix.flash-update.SunFire240.sh script.
   An example run of the script follows:

   # ./unix.flash-update.SunFire240.sh
 
   Flash Update 2.3: Program and system initialization in progress...
   Mar 19 14:01:43 wgs49-230 ebus: flashprom0 at ebus0: offset 0,0
   Mar 19 14:01:43 wgs49-230 genunix: flashprom0 is
   /pci@9,700000/ebus@1/flashprom@0,0

   Current System Flash PROM Revision:
   -----------------------------------
   OBP 4.10.5 2003/05/22 13:58

   Available System Flash PROM Revision:
   -------------------------------------
   OBP 4.10.10 2003/08/29 06:25

   NOTE: The system will be rebooted (reset) after the firmware has been
   updated.
   However, if an error occurs then the system will NOT be rebooted.

   Do you wish to update the firmware in the system Flash PROM? yes/no : yes

   Erasing the top half of the Flash PROM.
   Programming OBP into the top half of the Flash PROM.
   Verifying OBP in the top half of the Flash PROM.

   Erasing the bottom half of the Flash PROM.
   Programming OBP into the bottom half of Flash PROM.
   Verifying OBP in the bottom half of the Flash PROM.

   Erasing the top half of the Flash PROM.
   Programming POST into the top half of Flash PROM.
   Verifying POST in the top half of the Flash PROM.

   The system's Flash PROM firmware has been updated.
 
   Please wait while the system is rebooted...


8. The system will reboot once the Flash PROM update process completes.
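
Once the system is back up, the new OBP revision can be verified from Solaris with prtconf -V, which prints the OpenBoot PROM version; it should now match the "Available System Flash PROM Revision" shown above:

# prtconf -V
OBP 4.10.10 2003/08/29 06:25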

Update firmware ALOM (Advanced Lights Out Manager) 1.6.10 Sun Fire & Netra


Hello everyone, here are the step-by-step instructions for updating ALOM 1.6.10 on Sun Fire V125, V210, V215, V240, V245, V250, V440, V445 and Netra 210, 240 and 440 servers. The firmware can be downloaded here.

1. SSH/telnet into the OS as the root/superuser.

   Caution: do not log in through the SERIAL MGT port.

2. Change into the directory below:

# cd /usr/platform/`uname -i`/lib

3. Create an "images" directory:

# mkdir images

4. Change into the images directory:

# cd images

5. Upload the ALOM_1.6.10 firmware file into the images folder:

ALOM_1.6.10_fw_hw0.tar.gz

6. Unpack the firmware file:

        # gzcat ALOM_1.6.10_fw_hw0.tar.gz | tar xf -

The ALOM firmware archive contains the following files:
      README (this file)
      copyright
      Legal/ (directory containing Licence, Entitlement and Third Party Readmes)
      alombootfw (boot image file)
      alommainfw (main image file)

7. Load the boot image file alombootfw into the system controller hardware:

# /usr/platform/`uname -i`/sbin/scadm download boot alombootfw

8. Wait for the process to complete; it takes roughly 60 seconds.

9. Load the main image file alommainfw into the system controller hardware:

    # /usr/platform/`uname -i`/sbin/scadm download alommainfw

     This takes about two minutes after the boot image load completes.

10. When finished, delete the ALOM firmware file:

# rm ALOM_1.6.10_fw_hw0.tar.gz
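
The running ALOM firmware version can then be verified from the OS with the same scadm utility used above:

# /usr/platform/`uname -i`/sbin/scadm version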

Step by Step install Solaris Cluster
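
The installer below is launched from an installation image mounted through a loopback device. A minimal sketch of preparing that device, assuming the Oracle Solaris Cluster ISO has already been copied to /source (the ISO file name is only illustrative):

root@nsx01 # lofiadm -a /source/solaris-cluster-3.3u1-ga-sparc.iso
/dev/lofi/1
root@nsx01 # mkdir -p /source/clust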

root@nsx01 # mount -Fhsfs /dev/lofi/1 /source/clust
root@nsx01 # cd /source/clust/Solaris_sparc/
root@nsx01 # ./installer
Unable to access a usable display on the remote system. Continue in command-line mode?(Y/N) Y

Java Accessibility Bridge for GNOME loaded.


   Welcome to Oracle(R) Solaris Cluster; serious software made simple...


Before you begin, refer to the Release Notes and Installation Guide for the   products that you are installing. This documentation is available at http://www.oracle.com/technetwork/indexes/documentation/index.html.

You can install any or all of the Services provided by Oracle Solaris Cluster.

Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.

<Press ENTER to Continue>


Installation Type
-----------------

Do you want to install the full set of Oracle Solaris Cluster Products and Services? (Yes/No) [Yes] {"<" goes back, "!" exits} Yes

Install multilingual package(s) for all selected components [Yes] {"<" goes back, "!" exits}: yes
Checking System Status
  
    Available disk space...        : Checking .... OK               

    Memory installed...            : Checking .... OK               
    Swap space installed...        : Checking .... OK               
    Operating system patches...    : Checking .... OK               
    Operating system resources...  : Checking .... OK                


System ready for installation                   

   Enter 1 to continue [1] {"<" goes back, "!" exits} 1

Screen for selecting Type of Configuration
1. Configure Now - Selectively override defaults or express through
2. Configure Later - Manually configure following installation
   Select Type of Configuration [1] {"<" goes back, "!" exits}2


Ready to Install
----------------

The following components will be installed.
Product: Oracle Solaris Cluster
Uninstall Location: /var/sadm/prod/SUNWentsyssc33u1
Space Required: 667.18 MB
---------------------------------------------------
Java DB
  Java DB Server
  Java DB Client
Oracle Solaris Cluster 3.3u1
  Oracle Solaris Cluster Core
  Oracle Solaris Cluster Manager
Oracle Solaris Cluster Agents 3.3u1
  Oracle Solaris Cluster HA for Java(TM) System Application Server
  Oracle Solaris Cluster HA for Java(TM) System Message Queue       
  Oracle Solaris Cluster HA for Java(TM) System Messaging Server
  Oracle Solaris Cluster HA for Java(TM) System Calendar Server
  Oracle Solaris Cluster HA for Java(TM) System Directory Server
  Oracle Solaris Cluster HA for Java(TM) System Application Server EE
  Oracle Solaris Cluster HA for Instant Messaging
  Oracle Solaris Cluster HA/Scalable for Java(TM) System Web Server
  Oracle Solaris Cluster HA for Apache Tomcat
  Oracle Solaris Cluster HA for Apache
  Oracle Solaris Cluster HA for DHCP
  Oracle Solaris Cluster HA for DNS
  Oracle Solaris Cluster HA for MySQL
  Oracle Solaris Cluster HA for Sun N1 Service Provisioning System
  Oracle Solaris Cluster HA for NFS
  Oracle Solaris Cluster HA for Oracle
  Oracle Solaris Cluster HA for Agfa IMPAX
  Oracle Solaris Cluster HA for Samba
  Oracle Solaris Cluster HA for Sun N1 Grid Engine
  Oracle Solaris Cluster HA for Solaris Containers
  Oracle Solaris Cluster Support for Oracle RAC
  Oracle Solaris Cluster HA for Oracle E-Business Suite       
  Oracle Solaris Cluster HA for SAP liveCache
  Oracle Solaris Cluster HA for WebSphere Message Broker      
  Oracle Solaris Cluster HA for WebSphere MQ
  Oracle Solaris Cluster HA for Oracle 9iAS
  Oracle Solaris Cluster HA for SAPDB
  Oracle Solaris Cluster HA for SAP Web Application Server     
  Oracle Solaris Cluster HA for SAP
  Oracle Solaris Cluster HA for PostgreSQL
  Oracle Solaris Cluster HA for Sybase ASE
  Oracle Solaris Cluster HA for BEA WebLogic Server
  Oracle Solaris Cluster HA for Siebel
  Oracle Solaris Cluster HA for Kerberos
  Oracle Solaris Cluster HA for Swift Alliance Access
  Oracle Solaris Cluster HA for Swift Alliance Gateway
  Oracle Solaris Cluster HA for Informix
  Oracle Solaris Cluster HA for xVM Server SPARC Guest Domains 
  Oracle Solaris Cluster HA for PeopleSoft Enterprise
  Oracle Solaris Cluster HA for Oracle Business Intelligence
  Oracle Solaris Cluster HA for TimesTen
  Oracle Solaris Cluster Geographic Edition 3.3u1
  Oracle Solaris Cluster Geographic Edition Core Components    
  Oracle Solaris Cluster Geographic Edition Manager
  Sun StorEdge Availability Suite Data Replication Support    
  Hitachi Truecopy Data Replication Support
  SRDF Data Replication Support
  Oracle Data Guard Data Replication Support
  Oracle Solaris Cluster Geographic Edition Script-Based Plugin
  ReplicationSupport
Quorum Server
Java(TM) System High Availability Session Store 4.4.3


1. Install
2. Start Over
3. Exit Installation

What would you like to do [1]{"<" goes back, "!" exits}? 1

Oracle Solaris Cluster
|-1%--------------25%-----------------50%-------May 28 13:03:31 nsx01 syseventd[147]:
SIGHUP caught - reloading modules
--May 28 13:03:32 nsx01 Cluster.CCR: Daemon restarted--------75%--------------100%|

Installation Complete

Software installation has completed successfully. You can view the installation summary and log by using the choices below. Summary and log files are available in /var/sadm/install/logs/.


Your next step is to perform the postinstallation configuration and verification tasks documented in the Postinstallation Configuration and Startup Chapter of the Java(TM) Enterprise System Installation Guide. See: http://download.oracle.com/docs/cd/E19528-01/820-2827.

Enter 1 to view installation summary and Enter 2 to view installation logs   [1] {"!" exits} 1

Installation Summary Report
Install Summary
Oracle Solaris Cluster : Installed
Java DB : Installed, Configure After Install
Oracle Solaris Cluster 3.3u1 : Installed, Configure After Install
Oracle Solaris Cluster Agents 3.3u1 : Installed, Configure After Install
Oracle Solaris Cluster Geographic Edition 3.3u1 : Installed, Configure After Install
Quorum Server : Installed, Configure After Install
Java(TM) System High Availability Session Store 4.4.3 : Installed
Configuration Data
The configuration log is saved in :  /var/sadm/install/logs/JavaES_Install_log.895810212

Enter 1 to view installation summary and Enter 2 to view installation logs   [1] {"!" exits} 2


Installation Log
Installing Oracle Solaris Cluster
Log file: /var/sadm/install/logs/Oracle_Solaris_Cluster_install.B05281300
Installed:/var/sadm/prod/SUNWentsyssc33u1/uninstall_Sun_Java_tm__Enterprise_System_5.class
Uninstaller is at:/var/sadm/prod/SUNWentsyssc33u1/uninstall_Sun_Java_tm__Enterprise_System_5.class
Java DB Common
Installing Package: SUNWjavadb-common
Copyright 2006 Sun Microsystems, Inc.  All rights reserved. Use is subject to license terms.
125 blocks
Processing package instance <SUNWjavadb-common> from </source/clust/Solaris_sparc/Product/shared_components/Packages>
Java DB common files (sparc) 10.1.3,REV=1.2
Using </opt> as the package base directory.
## Processing package information.
## Processing system information.
Installing Java DB common files as <SUNWjavadb-common>
## Installing part 1 of 1.
Installation of <SUNWjavadb-common> was successful.
Installed Package: SUNWjavadb-common
Java DB Server

<--[0%]--[ENTER To Continue]--[n To Finish]--> {"!" exits} !

In order to notify you of potential updates, we need to confirm an internet connection. Do you want to proceed [Y/N] : N

Create ZFS Pool from new disk

Hello everyone, here are the step-by-step instructions for adding new disks to a ZFS pool.



Differences between ZFS & Traditional File Systems


root@topaz # devfsadm -c disk (create device nodes for newly attached disks)
root@topaz # 
root@topaz # 
root@topaz # cfgadm -al (list all attachment points and disks in the server)
May 10 15:24:33 topaz   Corrupt label; wrong magic number
May 10 15:24:34 topaz scsi: WARNING: /scsi_vhci/ssd@g60060e8005c0c0000000c0c000000238 (ssd18):
May 10 15:24:34 topaz   Corrupt label; wrong magic number
May 10 15:24:34 topaz scsi: WARNING: /scsi_vhci/ssd@g60060e8005c0c0000000c0c000000237 (ssd19):
May 10 15:24:34 topaz   Corrupt label; wrong magic number
May 10 15:24:34 topaz scsi: WARNING: /scsi_vhci/ssd@g60060e8005c0c0000000c0c000000236 (ssd20):
May 10 15:24:34 topaz   Corrupt label; wrong magic number
May 10 15:24:34 topaz scsi: WARNING: /scsi_vhci/ssd@g60060e8005c0c0000000c0c000000235 (ssd21):
May 10 15:24:34 topaz   Corrupt label; wrong magic number
Ap_Id                          Type         Receptacle   Occupant     Condition
SB0                            System_Brd   connected    configured   ok
SB0::cpu1                      cpu          connected    configured   ok
SB0::memory                    memory       connected    configured   ok
SB0::pci2                      io           connected    configured   ok
SB0::pci3                      io           connected    configured   ok
SB0::pci8                      io           connected    configured   ok
SB1                                         disconnected unconfigured unknown
SB2                                         disconnected unconfigured unknown
SB3                                         disconnected unconfigured unknown
SB4                                         disconnected unconfigured unknown
SB5                                         disconnected unconfigured unknown
SB6                                         disconnected unconfigured unknown
SB7                                         disconnected unconfigured unknown
SB8                                         disconnected unconfigured unknown
SB9                                         disconnected unconfigured unknown
SB10                                        disconnected unconfigured unknown
SB11                                        disconnected unconfigured unknown
SB12                                        disconnected unconfigured unknown
SB13                                        disconnected unconfigured unknown
SB14                                        disconnected unconfigured unknown
SB15                                        disconnected unconfigured unknown
c0                             fc-fabric    connected    configured   unknown
c0::50060e8005c0c030           disk         connected    configured   unknown
c1                             fc-fabric    connected    configured   unknown
c1::50060e8005c0c020           disk         connected    configured   unknown
cfgadm: Configuration administration not supported: Error: hotplug service is probably not running, please use 'svcadm enable hotplug' to enable the service. See cfgadm_shp(1M) for more details.
root@topaz # 
root@topaz # format
Searching for disks...May 10 15:24:52 topaz scsi: WARNING: /scsi_vhci/ssd@g60060e8005c0c0000000c0c000000241 (ssd9):
May 10 15:24:52 topaz   Corrupt label; wrong magic number
May 10 15:24:52 topaz scsi: WARNING: /scsi_vhci/ssd@g60060e8005c0c0000000c0c000000236 (ssd20):
May 10 15:24:52 topaz   Corrupt label; wrong magic number
May 10 15:24:52 topaz scsi: WARNING: /scsi_vhci/ssd@g60060e8005c0c0000000c0c000000236 (ssd20):
May 10 15:24:52 topaz   Corrupt label; wrong magic number
May 10 15:24:52 topaz scsi: WARNING: /scsi_vhci/ssd@g60060e8005c0c0000000c0c000000235 (ssd21):
May 10 15:24:52 topaz   Corrupt label; wrong magic number
May 10 15:24:52 topaz scsi: WARNING: /scsi_vhci/ssd@g60060e8005c0c0000000c0c000000235 (ssd21):
May 10 15:24:52 topaz   Corrupt label; wrong magic number
done

c2t60060E8005C0C0000000C0C00000023Ad0: configured with capacity of 49.99GB
c2t60060E8005C0C0000000C0C00000023Bd0: configured with capacity of 49.99GB
c2t60060E8005C0C0000000C0C00000023Cd0: configured with capacity of 49.99GB
c2t60060E8005C0C0000000C0C00000023Dd0: configured with capacity of 49.99GB
c2t60060E8005C0C0000000C0C00000023Ed0: configured with capacity of 49.99GB
c2t60060E8005C0C0000000C0C00000023Fd0: configured with capacity of 49.99GB
c2t60060E8005C0C0000000C0C000000235d0: configured with capacity of 49.99GB
c2t60060E8005C0C0000000C0C000000236d0: configured with capacity of 49.99GB
c2t60060E8005C0C0000000C0C000000237d0: configured with capacity of 49.99GB
c2t60060E8005C0C0000000C0C000000238d0: configured with capacity of 49.99GB
c2t60060E8005C0C0000000C0C000000239d0: configured with capacity of 49.99GB
c2t60060E8005C0C0000000C0C000000240d0: configured with capacity of 49.99GB
c2t60060E8005C0C0000000C0C000000241d0: configured with capacity of 49.99GB


AVAILABLE DISK SELECTIONS:
       0. c2t60060E8005C0C0000000C0C00000023Ad0 <HITACHI-OPEN-V-SUN-6007 cyl 13651 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8005c0c0000000c0c00000023a
       1. c2t60060E8005C0C0000000C0C00000023Bd0 <HITACHI-OPEN-V-SUN-6007 cyl 13651 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8005c0c0000000c0c00000023b
       2. c2t60060E8005C0C0000000C0C00000023Cd0 <HITACHI-OPEN-V-SUN-6007 cyl 13651 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8005c0c0000000c0c00000023c
       3. c2t60060E8005C0C0000000C0C00000023Dd0 <HITACHI-OPEN-V-SUN-6007 cyl 13651 alt 2 hd 15 sec 512>
          /scsi_vhci/ssd@g60060e8005c0c0000000c0c00000023d
root@topaz # 

root@topaz # zpool create -m none gfx1-pool c2t60060E8005C0C0000000C0C00000023Ad0 c2t60060E8005C0C0000000C0C00000023Bd0 c2t60060E8005C0C0000000C0C00000023Cd0 ( disks that will make up the new zpool )

May 10 15:29:34 topaz scsi: WARNING: /scsi_vhci/ssd@g60060e8005c0c0000000c0c00000023a (ssd16):
May 10 15:29:34 topaz   Corrupt label; wrong magic number
May 10 15:29:34 topaz scsi: WARNING: /scsi_vhci/ssd@g60060e8005c0c0000000c0c00000023b (ssd15):
May 10 15:29:34 topaz   Corrupt label; wrong magic number
May 10 15:29:34 topaz scsi: WARNING: /scsi_vhci/ssd@g60060e8005c0c0000000c0c00000023c (ssd14):
May 10 15:29:34 topaz   Corrupt label; wrong magic number


root@topaz # zpool list
NAME        SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
gfx1-pool   149G   125K   149G     0%  ONLINE  -
root@topaz # zpool status
  pool: gfx1-pool
 state: ONLINE
 scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        gfx1-pool                                ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Ad0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Bd0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Cd0  ONLINE       0     0     0


root@topaz # zpool add -f gfx1-pool c2t60060E8005C0C0000000C0C00000023Ed0 ( add a disk to the pool )
May 10 15:30:51 topaz scsi: WARNING: /scsi_vhci/ssd@g60060e8005c0c0000000c0c00000023e (ssd12):
May 10 15:30:51 topaz   Corrupt label; wrong magic number
errors: No known data errors

root@topaz # zpool create -m none gfx2-pool c2t60060E8005C0C0000000C0C00000023Fd0 c2t60060E8005C0C0000000C0C000000235d0 ( create a new pool )
May 10 15:33:13 topaz scsi: WARNING: /scsi_vhci/ssd@g60060e8005c0c0000000c0c00000023f (ssd11):
May 10 15:33:13 topaz   Corrupt label; wrong magic number
May 10 15:33:13 topaz scsi: WARNING: /scsi_vhci/ssd@g60060e8005c0c0000000c0c000000235 (ssd21):
May 10 15:33:13 topaz   Corrupt label; wrong magic number

root@topaz # 
root@topaz # 
root@topaz # 
root@topaz # 
root@topaz # zpool list (list all zpools)
NAME        SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
gfx1-pool   249G   110K   249G     0%  ONLINE  -
gfx2-pool  99.5G   124K  99.5G     0%  ONLINE  -
root@topaz # 
root@topaz # 
root@topaz # 
root@topaz # zpool status
  pool: gfx1-pool
 state: ONLINE
 scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        gfx1-pool                                ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Ad0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Bd0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Cd0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Dd0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Ed0  ONLINE       0     0     0

errors: No known data errors

  pool: gfx2-pool
 state: ONLINE
 scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        gfx2-pool                                ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Fd0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C000000235d0  ONLINE       0     0     0

errors: No known data errors
root@topaz # zpool status | grep gfx2=pool
root@topaz # zpool status | grep gfx2-pool
  pool: gfx2-pool
        gfx2-pool                                ONLINE       0     0     0
root@topaz # zpool status gfx2-pool       
  pool: gfx2-pool
 state: ONLINE
 scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        gfx2-pool                                ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Fd0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C000000235d0  ONLINE       0     0     0

errors: No known data errors
root@topaz # zpool status          
  pool: gfx1-pool
 state: ONLINE
 scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        gfx1-pool                                ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Ad0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Bd0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Cd0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Dd0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Ed0  ONLINE       0     0     0

errors: No known data errors

  pool: gfx2-pool
 state: ONLINE
 scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        gfx2-pool                                ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Fd0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C000000235d0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C000000236d0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C000000237d0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C000000238d0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C000000239d0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C000000240d0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C000000241d0  ONLINE       0     0     0

errors: No known data errors
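
Note that gfx1-pool and gfx2-pool are plain striped pools, so the loss of any single LUN takes the whole pool with it. If the LUNs are not already protected by RAID on the storage array, a mirrored layout is the safer choice; a minimal sketch with two hypothetical spare LUNs:

root@topaz # zpool create -m none gfx3-pool mirror <lun-A> <lun-B>   (two-way mirror vdev; replace <lun-A>/<lun-B> with real c2t...d0 devices)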
root@topaz # 
root@topaz # 
root@topaz # 
root@topaz # format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c2t60060E8005C0C0000000C0C00000023Ad0 <HITACHI-OPEN-V      -SUN-6007-50.00GB>
          /scsi_vhci/ssd@g60060e8005c0c0000000c0c00000023a
       1. c2t60060E8005C0C0000000C0C00000023Bd0 <HITACHI-OPEN-V      -SUN-6007-50.00GB>
          /scsi_vhci/ssd@g60060e8005c0c0000000c0c00000023b
       2. c2t60060E8005C0C0000000C0C00000023Cd0 <HITACHI-OPEN-V      -SUN-6007-50.00GB>
          /scsi_vhci/ssd@g60060e8005c0c0000000c0c00000023c
       3. c2t60060E8005C0C0000000C0C00000023Dd0 <HITACHI-OPEN-V      -SUN-6007-50.00GB>
          /scsi_vhci/ssd@g60060e8005c0c0000000c0c00000023d
root@topaz # format
format> disk    


AVAILABLE DISK SELECTIONS:
       0. c2t60060E8005C0C0000000C0C00000023Ad0 <HITACHI-OPEN-V      -SUN-6007-50.00GB>
          /scsi_vhci/ssd@g60060e8005c0c0000000c0c00000023a
Specify disk (enter its number)[0]: 0
selecting c2t60060E8005C0C0000000C0C00000023Ad0
[disk formatted]
format> volname
Enter 8-character volume name (remember quotes)[""]:^C
format> 
format> p


PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> p
Current partition table (original):
Total disk sectors available: 104842462 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm               256       49.99GB          104842462    
  1 unassigned    wm                 0           0               0    
  2 unassigned    wm                 0           0               0    
  3 unassigned    wm                 0           0               0    
  4 unassigned    wm                 0           0               0    
  5 unassigned    wm                 0           0               0    
  6 unassigned    wm                 0           0               0    
  8   reserved    wm         104842463        8.00MB          104858846    

root@topaz # 
root@topaz # format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c2t60060E8005C0C0000000C0C00000023Ad0 <HITACHI-OPEN-V      -SUN-6007-50.00GB>
          /scsi_vhci/ssd@g60060e8005c0c0000000c0c00000023a
       1. c2t60060E8005C0C0000000C0C00000023Bd0 <HITACHI-OPEN-V      -SUN-6007-50.00GB>
          /scsi_vhci/ssd@g60060e8005c0c0000000c0c00000023b
       2. c2t60060E8005C0C0000000C0C00000023Cd0 <HITACHI-OPEN-V      -SUN-6007-50.00GB>
          /scsi_vhci/ssd@g60060e8005c0c0000000c0c00000023c
       3. c2t60060E8005C0C0000000C0C00000023Dd0 <HITACHI-OPEN-V      -SUN-6007-50.00GB>
          /scsi_vhci/ssd@g60060e8005c0c0000000c0c00000023d
- hit space for more or s to select - 
root@topaz # 
root@topaz # svcadm enable hotplug
root@topaz # 
root@topaz # 
root@topaz # 
root@topaz # cfgadm -al
Ap_Id                          Type         Receptacle   Occupant     Condition
SB0                            System_Brd   connected    configured   ok
SB0::cpu1                      cpu          connected    configured   ok
SB0::memory                    memory       connected    configured   ok
SB0::pci2                      io           connected    configured   ok
SB0::pci3                      io           connected    configured   ok
SB0::pci8                      io           connected    configured   ok
SB1                                         disconnected unconfigured unknown
SB2                                         disconnected unconfigured unknown
SB3                                         disconnected unconfigured unknown
SB4                                         disconnected unconfigured unknown
SB5                                         disconnected unconfigured unknown
SB6                                         disconnected unconfigured unknown
SB7                                         disconnected unconfigured unknown
SB8                                         disconnected unconfigured unknown
SB9                                         disconnected unconfigured unknown
SB10                                        disconnected unconfigured unknown
SB11                                        disconnected unconfigured unknown
SB12                                        disconnected unconfigured unknown
SB13                                        disconnected unconfigured unknown
SB14                                        disconnected unconfigured unknown
SB15                                        disconnected unconfigured unknown
c0                             fc-fabric    connected    configured   unknown
c0::50060e8005c0c030           disk         connected    configured   unknown
c1                             fc-fabric    connected    configured   unknown
c1::50060e8005c0c020           disk         connected    configured   unknown
iou#0-pci#3                    fibre/hp     connected    configured   ok
iou#0-pci#4                    etherne/hp   connected    configured   ok
root@topaz # 
root@topaz # 
root@topaz # zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
gfx1-pool   110K   245G    31K  none
gfx2-pool   110K   392G    31K  none
root@topaz # zfs create -o mountpoint=/gfx/data gfx1-pool/data ( Create Mountpoint )
root@topaz # df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d10         20G    11G   8.7G    56%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                    45G   1.7M    45G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
fd                       0K     0K     0K     0%    /dev/fd
/dev/md/dsk/d30         16G   1.1G    14G     8%    /var
swap                    45G    32K    45G     1%    /tmp
swap                    45G    88K    45G     1%    /var/run
/dev/md/dsk/d501       482M   1.0M   433M     1%    /globaldevices
/dev/md/dsk/d40         20G    55M    19G     1%    /opt
gfx1-pool/data         245G    31K   245G     1%    /gfx/data

root@topaz # 
root@topaz # 
root@topaz # zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
gfx1-pool        158K   245G    31K  none
gfx1-pool/data    31K   245G    31K  /gfx/data
gfx2-pool        110K   392G    31K  none
root@topaz # zfs create -o mountpoint=/gfx/backup gfx2-pool/backup
root@topaz # zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
gfx1-pool          158K   245G    31K  none
gfx1-pool/data      31K   245G    31K  /gfx/data
gfx2-pool          158K   392G    31K  none
gfx2-pool/backup    31K   392G    31K  /gfx/backup

root@topaz # mkfile 1m test (create a 1 MB test file)
root@topaz # ls -al
total 6
drwxr-xr-x   2 root     root           3 May 10 15:43 .
drwxr-xr-x   4 root     root         512 May 10 15:42 ..
-rw------T   1 root     root     1048576 May 10 15:43 test
root@topaz # zpool status
  pool: gfx1-pool
 state: ONLINE
 scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        gfx1-pool                                ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Ad0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Bd0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Cd0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Dd0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Ed0  ONLINE       0     0     0

errors: No known data errors

  pool: gfx2-pool
 state: ONLINE
 scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        gfx2-pool                                ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C00000023Fd0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C000000235d0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C000000236d0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C000000237d0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C000000238d0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C000000239d0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C000000240d0  ONLINE       0     0     0
          c2t60060E8005C0C0000000C0C000000241d0  ONLINE       0     0     0


errors: No known data errors