Ops Center is Oracle's management and monitoring tool for Oracle Solaris. Premier Support for the current version 12.4 ends in April 2024. VDCF can be an alternative for customers planning beyond April 2024. Find more information about VDCF at https://www.jomasoft.ch/vdcf/
After the import, VDCF can be used to operate, manage, update and monitor these systems.

Step 1 / Install the VDCF Client package

The environment consists of a control domain and two guest domains, where one is running Solaris 10 and the second is running Solaris 11.4. The whole process takes only a few minutes. Two commands have to be executed on each system to install the JomaSoft VDCF client package and add the required ssh key.
On Solaris 11.4 SRU60 (Aug 2023) and later, two commands are required:
# wget http://vdcf/pkg/$(uname -p)/JSvdcf-client.pkg
# yes | pkgadd -d ./JSvdcf-client.pkg all
On older Solaris versions, a single command works:
# yes | pkgadd -d http://vdcf/pkg/$(uname -p)/JSvdcf-client.pkg all
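Which of the two methods applies can be derived from the Solaris release string. The following is only a minimal sketch under an assumption: that the SRU number is the third dot-separated field of the release reported by `uname -v` on Solaris 11.4. `install_method` is a hypothetical helper, not part of VDCF.

```shell
#!/bin/sh
# Hypothetical helper: decide how to install the VDCF client package.
# "wget+pkgadd" = download first, then install (11.4 SRU60 and later);
# "pkgadd-url"  = let pkgadd fetch the package directly (older releases).
install_method() {
  case "$1" in
    11.4.*)
      # assumption: SRU is the third dot-separated field, e.g. 11.4.60.147.2
      sru=$(echo "$1" | cut -d. -f3)
      if [ "$sru" -ge 60 ]; then
        echo "wget+pkgadd"
      else
        echo "pkgadd-url"
      fi
      ;;
    *) echo "pkgadd-url" ;;
  esac
}
```

For example, `install_method 11.4.60.147.2` selects the download-first method, while Solaris 10 or an older 11.4 SRU keeps the single pkgadd command.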
..............25%..............50%..............75%..............100%
## Download Complete

Processing package instance <JSvdcf-client> from <http://vdcf/pkg/sparc/JSvdcf-client.pkg>

JomaSoft VDCF - Client(sparc) 8.3.5
All rights reserved.
Use is subject to license terms.
## Executing checkinstall script.
Using </opt/jomasoft/vdcf> as the package base directory.
## Processing package information.
## Processing system information.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.
This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Installing JomaSoft VDCF - Client as <JSvdcf-client>
/opt/jomasoft/vdcf/client/conf/patch_kernel.cfg
/opt/jomasoft/vdcf/client/pkgs/JSvdcf-sync.pkg
/opt/jomasoft/vdcf/client/release
/opt/jomasoft/vdcf/client/rexec/asr_mgr_config
/opt/jomasoft/vdcf/client/rexec/cdom_config
<snip>
/opt/jomasoft/vdcf/client/smf/vdcf_iscsi.xml
/opt/jomasoft/vdcf/client/smf/zfs_encryption_load_key.xml
/opt/jomasoft/vdcf/client/smf/zfs_on_nfs.xml
[ verifying class <none> ]
## Executing postinstall script.

*** MANUAL-TASK TODO ***
/opt/jomasoft/vdcf/client/sbin/update_key -u <FLASH_WEBSERVER_URL>
Installation of <JSvdcf-client> was successful.
# /opt/jomasoft/vdcf/client/sbin/update_key -u http://vdcf
Obtaining public key ...
done.
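The update_key step obtains the management server's public ssh key so that the VDCF server can reach the client. As a rough, hypothetical illustration of what such a key-distribution step typically does (this is not VDCF's actual code), an idempotent append to authorized_keys could look like this:

```shell
#!/bin/sh
# Hypothetical sketch of a key-distribution step: append a fetched public
# key to authorized_keys only if it is not already present, so the step
# can safely be re-run.
append_key() {
  key="$1"      # public key line, e.g. previously fetched from the server
  keyfile="$2"  # typically ~/.ssh/authorized_keys
  touch "$keyfile"
  # -x: match the whole line, -F: fixed string, -q: quiet
  grep -qxF "$key" "$keyfile" || echo "$key" >> "$keyfile"
}
```

Running it twice with the same key leaves a single entry in the file.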
Step 2 / Import the systems into VDCF
The systems are integrated into VDCF by executing a single, simple node import command per system on the VDCF central management server. First, the control domain has to be imported.
-bash-5.1$ time node -c import name=s0012
Importing new Node s0012 ...
Warning: Permanently added 's0012,192.168.100.12' (ED25519) to the list of known hosts.
Discover Systeminfo ...
Discover Rootdiskinfo ...
Discover Diskinfo ...
This may take some time, it depends on the number of disks
.......................
Discover Netinfo ...
Node configuration successfully added.
WARN: No matching build found. Patchlevel 4.48.0.1.126.1 set as build name.
System registration done for s0012.
registering disks from node s0012
New visible Lun 60002AC000000000000004AF0001507B Size: 30.00 GB
New visible Lun 60002AC000000000000004C00001507B Size: 30.00 GB
New visible Lun 60002AC000000000000004C20001507B Size: 30.00 GB
New visible Lun 60002AC000000000000004C80001507B Size: 30.00 GB
New visible Lun 60002AC000000000000004CC0001507B Size: 30.00 GB
Registered BootDisk of Node s0012: 60002AC000000000000004D20001507B Size: 50.00 GB
Registered new Lun: 60002AC000000000000004E00001507B Size: 30.00 GB
Registered new Lun: 60002AC000000000000004E10001507B Size: 30.00 GB
Found Control Domain s0012 (primary).
Found new control domain on 's0012'
importing node datasets from node s0012
Root Dataset s0012_root (ZPOOL: rpool) with Size 49.79 GB successfully imported from Node s0012
Successfully added node filesystem 'ROOT/s11.4.48.0.1.126.1' with mountpoint '/' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'ROOT/s11.4.48.0.1.126.1/var' with mountpoint '/var' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE' with mountpoint '/var/share' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE/tmp' with mountpoint '/var/tmp' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE/kvol' with mountpoint '/var/share/kvol' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE/zones' with mountpoint '/system/zones' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE/cores' with mountpoint '/var/share/cores' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE/crash' with mountpoint '/var/share/crash' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'export' with mountpoint '/export' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'export/home' with mountpoint '/export/home' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'export/home/admin' with mountpoint '/export/home/admin' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'guests' with mountpoint '/guests' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE/sstore' with mountpoint '/var/share/sstore/repo' (ZPOOL: rpool) to dataset 's0012_root'
No vServer found on Node s0012.
WARN: Add console configuration manually using: console -c add node=s0012
Node s0012 import finished

real 1m18.610s
sys 0m56.736s
Now we can import ldom1 with Solaris 10 and ldom2 with Solaris 11.4.

-bash-5.1$ time node -c import name=ldom1
Importing new Node ldom1 ...
Warning: Permanently added 'ldom1,192.168.20.201' (RSA) to the list of known hosts.
Discover Systeminfo ...
psrinfo: Physical processor view not supported
Discover Rootdiskinfo ...
Discover Diskinfo ...
This may take some time, it depends on the number of disks
Discover Netinfo ...
Discovered Node ldom1 as Guest Domain ldom1 on Control Domain s0012
Discovering CDom s0012 ...
Found Control Domain s0012 (primary).
using partitioning.cfg as profile for root disk.
GDom ldom1 (Imported GDom) is created.
Commit guest domain to update configuration
AutoBoot for GDom ldom1 set to false
Network 'management' for GDom 'ldom1' defined
Assigned disk 60002AC000000000000004AF0001507B as ROOTDISK to GDom ldom1
committing guest domain <ldom1>
Updating ActiveBuild from '-' to 'imported' for Node ldom1
System registration done for ldom1.
node with all vservers being checked: ldom1
check on node ldom1 successful
patch deployment updated from node ldom1
registering disks from node ldom1
importing node datasets from node ldom1
Root Dataset ldom1_root (ZPOOL: rpool) with Size 29.79 GB successfully imported from Node ldom1
Successfully added node filesystem 'ROOT/_SUNWCAT_S' with mountpoint '/' (ZPOOL: rpool) to dataset 'ldom1_root'
Successfully added node filesystem 'export' with mountpoint '/export' (ZPOOL: rpool) to dataset 'ldom1_root'
Successfully added node filesystem 'export/home' with mountpoint '/export/home' (ZPOOL: rpool) to dataset 'ldom1_root'
No vServer found on Node ldom1.
Node ldom1 import finished
real 1m3.560s
sys 0m37.529s
-bash-5.1$ time node -c import name=ldom2
Importing new Node ldom2 ...
Warning: Permanently added 'ldom2,192.168.20.202' (ED25519) to the list of known hosts.
Discover Systeminfo ...
Discover Rootdiskinfo ...
Discover Diskinfo ...
This may take some time, it depends on the number of disks
Discover Netinfo ...
Discovered Node ldom2 as Guest Domain ldom2 on Control Domain s0012
Importing Guest Domain ldom2 ...
Discovering CDom s0012 ...
using partitioning.cfg as profile for root disk.
GDom ldom2 (Imported GDom) is created.
Commit guest domain to update configuration
AutoBoot for GDom ldom2 set to false
Network 'management' for GDom 'ldom2' defined
Assigned disk 60002AC000000000000004E00001507B as ROOTDISK to GDom ldom2
committing guest domain <ldom2>
WARN: No matching build found. Patchlevel 4.0.0.1.15.0 set as build name.
Updating ActiveBuild from '-' to '4.0.0.1.15.0' for Node ldom2
System registration done for ldom2.
registering disks from node ldom2
importing node datasets from node ldom2
Root Dataset ldom2_root (ZPOOL: rpool) with Size 29.79 GB successfully imported from Node ldom2
Successfully added node filesystem 'ROOT/solaris' with mountpoint '/' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'ROOT/solaris/var' with mountpoint '/var' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'VARSHARE' with mountpoint '/var/share' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'VARSHARE/tmp' with mountpoint '/var/tmp' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'VARSHARE/kvol' with mountpoint '/var/share/kvol' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'VARSHARE/zones' with mountpoint '/system/zones' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'export' with mountpoint '/export' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'export/home' with mountpoint '/export/home' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'export/home/admin' with mountpoint '/export/home/admin' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'guests' with mountpoint '/guests' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'VARSHARE/sstore' with mountpoint '/var/share/sstore/repo' (ZPOOL: rpool) to dataset 'ldom2_root'
No vServer found on Node ldom2.
Node ldom2 import finished
real 1m20.849s
user 0m18.075s
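With many systems, the per-node commands above can be wrapped in a small loop. This is only a sketch: `do_import` and `import_nodes` are hypothetical wrappers; in the sketch `do_import` merely echoes the CLI call, and on a real management server it would run `node -c import name="$1"` instead.

```shell
#!/bin/sh
# Sketch: import a list of nodes sequentially and stop on the first failure.
do_import() {
  # hypothetical wrapper; on the VDCF management server this would run:
  #   node -c import name="$1"
  echo "node -c import name=$1"
}
import_nodes() {
  for n in "$@"; do
    do_import "$n" || { echo "import of $n failed" >&2; return 1; }
  done
}
import_nodes s0012 ldom1 ldom2
```

Importing the control domain first, as in the walkthrough above, matters because the guest domains are discovered relative to it.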
-bash-5.1$ gdom -c show cdom=s0012
Name cState rState CDom Model OS Patch-Level Cores Max-Cores VCPUs RAM/GB #V Comment
ldom1 ACTIVE ACTIVE (RUNNING) s0012 ORCL,SPARC-S7-2 10 150400-40 (U11+) 0 0 1 4.0 0 Imported GDom
ldom2 ACTIVE ACTIVE (RUNNING) s0012 ORCL,SPARC-S7-2 11 4.0.0.1.15.0 (U4) 0 0 1 4.0 0 Imported GDom
Operating, migrating, patching/updating and monitoring with VDCF is as easy as importing, thanks to its consistent CLI.
Check out the VDCF demo videos about other powerful features: https://www.youtube.com/user/JomaSoftVideo