18 November 2022

Solaris 11.4 SRU51 (Nov 2022) - virtinfo enhancements for Solaris Zones

In the past, virtinfo provided only minimal information about the underlying global zone:

root@marcel49:~# uname -a
SunOS marcel49 5.11 11.4.47.119.2 sun4v sparc sun4v non-global-zone

root@marcel49:~# virtinfo get all
NAME            CLASS       PROPERTY VALUE
non-global-zone current     -        -
logical-domain  parent      -        -
non-global-zone unsupported status   not supported in non-global-zone
kernel-zone     unsupported status   not supported in non-global-zone
logical-domain  unsupported status   not supported in non-global-zone


After upgrading to SRU51, we get details from the parent.

Remark: I modified our serial number in the output below.
The serial is helpful if you need to open a case with Oracle Support.

root@marcel49:~# uname -a
SunOS marcel49 5.11 11.4.51.132.1 sun4v sparc sun4v non-global-zone

root@marcel49:~# virtinfo get all
NAME            CLASS       PROPERTY              VALUE
non-global-zone current     zonename              marcel49
non-global-zone current     chassis-serial-number AK99999999
non-global-zone current     parent-hostname       g0049.jomasoft-lab.ch
logical-domain  parent      -                     -
non-global-zone unsupported status                not supported in non-global-zone
kernel-zone     unsupported status                not supported in non-global-zone
logical-domain  unsupported status                not supported in non-global-zone

Printing individual values is also supported:

root@marcel49:~# virtinfo -c current get -H -o value parent-hostname
g0049.jomasoft-lab.ch
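
These machine-readable values are handy for scripting. Below is a minimal sketch,
using only the virtinfo properties and options shown above plus the standard Solaris
zonename utility, that collects the parent details for a support case:

#!/bin/sh
# Gather host details from inside a non-global zone, e.g. for an
# Oracle Support Service Request (requires SRU51 or later in the zone).
SERIAL=$(virtinfo -c current get -H -o value chassis-serial-number)
PARENT=$(virtinfo -c current get -H -o value parent-hostname)
echo "Zone $(zonename) runs on ${PARENT} (chassis serial ${SERIAL})"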

15 November 2022

New Features in Solaris 11.4 SRU51 (November 2022)

Another quarterly Solaris SRU that includes new features:

Live Memory Reconfiguration for Kernel Zones on x86
Setting disk IDs for Kernel Zone live storage migration
Propagating hosting environment information into zones
zpool/zfs -o atime/pathname
/dev/full device (see the sketch below)
mkfile size argument update
FOSS: GCC 12, perl 5.36, Unbound DNS server
EOF: hal-cups-utils
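
To pick one item from the list: /dev/full is a pseudo-device long known from Linux,
where every write fails with ENOSPC. Assuming the new Solaris device behaves the same
way, it allows testing "disk full" error handling without filling a real filesystem:

# A write to /dev/full should fail with "No space left on device"
echo test > /dev/full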

Stay tuned for more blogs about these new features

01 November 2022

How easy is it to import Ops Center created systems into JomaSoft VDCF?

Ops Center is Oracle's management and monitoring tool for Oracle Solaris.
Premier Support for the current version 12.4 ends April 2024.

JomaSoft VDCF is a lightweight CLI management and monitoring tool with similar features.
VDCF can be an alternative for customers planning beyond April 2024.

Find more information about VDCF
https://www.jomasoft.ch/vdcf/


This blog shows how easy it is to integrate systems created using Ops Center into VDCF.
After the import, VDCF can be used to operate, manage, update and monitor these systems.


We will import 3 systems into VDCF: first the SPARC S7 control domain, then two LDoms,
one running Solaris 10 and the other running Solaris 11.4. The process takes only a few
minutes. Two commands must be executed on each system to install the JomaSoft VDCF
client pkg and add the required ssh key.


Step 1 / Install VDCF pkg and key on each target system

# yes | pkgadd -d http://vdcf/pkg/$(uname -p)/JSvdcf-client.pkg all


## Downloading...
..............25%..............50%..............75%..............100%
## Download Complete

Processing package instance <JSvdcf-client> from <http://vdcf/pkg/sparc/JSvdcf-client.pkg>

JomaSoft VDCF - Client(sparc) 8.3.5
Copyright (c) 2005-2022 JomaSoft GmbH
All rights reserved.
Use is subject to license terms.

## Executing checkinstall script.
Using </opt/jomasoft/vdcf> as the package base directory.
## Processing package information.
## Processing system information.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <JSvdcf-client> [y,n,?]

Installing JomaSoft VDCF - Client as <JSvdcf-client>

## Installing part 1 of 1.
/opt/jomasoft/vdcf/client/conf/patch_kernel.cfg
/opt/jomasoft/vdcf/client/pkgs/JSvdcf-sync.pkg
/opt/jomasoft/vdcf/client/release
/opt/jomasoft/vdcf/client/rexec/asr_mgr_config
/opt/jomasoft/vdcf/client/rexec/cdom_config
<snip>
/opt/jomasoft/vdcf/client/smf/vdcf_iscsi.xml
/opt/jomasoft/vdcf/client/smf/zfs_encryption_load_key.xml
/opt/jomasoft/vdcf/client/smf/zfs_on_nfs.xml
[ verifying class <none> ]
## Executing postinstall script.

*** MANUAL-TASK TODO ***
Add Public Key of root at Management Server to vdcfexec authorized_keys using :
/opt/jomasoft/vdcf/client/sbin/update_key -u <FLASH_WEBSERVER_URL>

Installation of <JSvdcf-client> was successful.

# /opt/jomasoft/vdcf/client/sbin/update_key -u http://vdcf
Obtaining public key ... done.
SSH Key updated successfully.
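
Because these are the only two commands needed per system, the client setup is easy to
automate. A hypothetical loop (the hostnames are just examples) that prepares several
target systems at once, assuming root ssh access:

#!/bin/sh
# Roll out the two client-setup commands shown above to all targets.
# Single quotes keep $(uname -p) from expanding on the local side.
for h in s0012 ldom1 ldom2; do
  ssh root@$h 'yes | pkgadd -d http://vdcf/pkg/$(uname -p)/JSvdcf-client.pkg all'
  ssh root@$h '/opt/jomasoft/vdcf/client/sbin/update_key -u http://vdcf'
done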


Step 2 / Import the systems into VDCF

Integrating the systems into VDCF takes a single, simple node import command,
executed on the VDCF central management server.

First we have to import the control domain.

-bash-5.1$ time node -c import name=s0012
Importing new Node s0012 ...
Warning: Permanently added 's0012,192.168.100.12' (ED25519) to the list of known hosts.
Discover Systeminfo ...
Discover Rootdiskinfo ...
Discover Diskinfo ...
This may take some time, it depends on the number of disks
.......................
Discover Netinfo ...
Node configuration successfully added.
WARN: No matching build found. Patchlevel 4.48.0.1.126.1 set as build name.
System registration done for s0012.
registering disks from node s0012
New visible Lun 60002AC000000000000004AF0001507B Size: 30.00 GB
New visible Lun 60002AC000000000000004C00001507B Size: 30.00 GB
New visible Lun 60002AC000000000000004C20001507B Size: 30.00 GB
New visible Lun 60002AC000000000000004C80001507B Size: 30.00 GB
New visible Lun 60002AC000000000000004CC0001507B Size: 30.00 GB
Registered BootDisk of Node s0012: 60002AC000000000000004D20001507B Size: 50.00 GB
Registered new Lun: 60002AC000000000000004E00001507B Size: 30.00 GB
Registered new Lun: 60002AC000000000000004E10001507B Size: 30.00 GB
Found Control Domain s0012 (primary).
Found new control domain on 's0012'
importing node datasets from node s0012
Root Dataset s0012_root (ZPOOL: rpool) with Size 49.79 GB successfully imported from Node s0012
Successfully added node filesystem 'ROOT/s11.4.48.0.1.126.1' with mountpoint '/' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'ROOT/s11.4.48.0.1.126.1/var' with mountpoint '/var' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE' with mountpoint '/var/share' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE/tmp' with mountpoint '/var/tmp' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE/kvol' with mountpoint '/var/share/kvol' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE/zones' with mountpoint '/system/zones' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE/cores' with mountpoint '/var/share/cores' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE/crash' with mountpoint '/var/share/crash' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'export' with mountpoint '/export' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'export/home' with mountpoint '/export/home' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'export/home/admin' with mountpoint '/export/home/admin' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'guests' with mountpoint '/guests' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE/sstore' with mountpoint '/var/share/sstore/repo' (ZPOOL: rpool) to dataset 's0012_root'
No vServer found on Node s0012.
WARN: Add console configuration manually using: console -c add node=s0012
Node s0012 import finished

real 1m18.610s
user 0m15.031s
sys 0m56.736s
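
Note the WARN line near the end of the output: the console configuration is not
discovered automatically. As the output itself suggests, it can be added afterwards
with the printed command (further options may be needed depending on the console setup):

-bash-5.1$ console -c add node=s0012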


Now we can import ldom1 with Solaris 10 and ldom2 with Solaris 11.4.


-bash-5.1$ time node -c import name=ldom1
Importing new Node ldom1 ...
Warning: Permanently added 'ldom1,192.168.20.201' (RSA) to the list of known hosts.
Discover Systeminfo ...
psrinfo: Physical processor view not supported
Discover Rootdiskinfo ...
Discover Diskinfo ...
This may take some time, it depends on the number of disks
Discover Netinfo ...
Discovered Node ldom1 as Guest Domain ldom1 on Control Domain s0012
Importing Guest Domain ldom1 ...
Discovering CDom s0012 ...
Found Control Domain s0012 (primary).
using partitioning.cfg as profile for root disk.
GDom ldom1 (Imported GDom) is created.
Commit guest domain to update configuration
AutoBoot for GDom ldom1 set to false
Network 'management' for GDom 'ldom1' defined
Assigned disk 60002AC000000000000004AF0001507B as ROOTDISK to GDom ldom1
committing guest domain <ldom1>
Updating ActiveBuild from '-' to 'imported' for Node ldom1
System registration done for ldom1.
node with all vservers being checked: ldom1
check on node ldom1 successful
patch deployment updated from node ldom1
registering disks from node ldom1
importing node datasets from node ldom1
Root Dataset ldom1_root (ZPOOL: rpool) with Size 29.79 GB successfully imported from Node ldom1
Successfully added node filesystem 'ROOT/_SUNWCAT_S' with mountpoint '/' (ZPOOL: rpool) to dataset 'ldom1_root'
Successfully added node filesystem 'export' with mountpoint '/export' (ZPOOL: rpool) to dataset 'ldom1_root'
Successfully added node filesystem 'export/home' with mountpoint '/export/home' (ZPOOL: rpool) to dataset 'ldom1_root'
No vServer found on Node ldom1.
Node ldom1 import finished

real 1m3.560s
user 0m13.010s
sys 0m37.529s


-bash-5.1$ time node -c import name=ldom2
Importing new Node ldom2 ...
Warning: Permanently added 'ldom2,192.168.20.202' (ED25519) to the list of known hosts.
Discover Systeminfo ...
Discover Rootdiskinfo ...
Discover Diskinfo ...
This may take some time, it depends on the number of disks
Discover Netinfo ...
Discovered Node ldom2 as Guest Domain ldom2 on Control Domain s0012
Importing Guest Domain ldom2 ...
Discovering CDom s0012 ...
Found Control Domain s0012 (primary).
using partitioning.cfg as profile for root disk.
GDom ldom2 (Imported GDom) is created.
Commit guest domain to update configuration
AutoBoot for GDom ldom2 set to false
Network 'management' for GDom 'ldom2' defined
Assigned disk 60002AC000000000000004E00001507B as ROOTDISK to GDom ldom2
committing guest domain <ldom2>
WARN: No matching build found. Patchlevel 4.0.0.1.15.0 set as build name.
Updating ActiveBuild from '-' to '4.0.0.1.15.0' for Node ldom2
System registration done for ldom2.
registering disks from node ldom2
importing node datasets from node ldom2
Root Dataset ldom2_root (ZPOOL: rpool) with Size 29.79 GB successfully imported from Node ldom2
Successfully added node filesystem 'ROOT/solaris' with mountpoint '/' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'ROOT/solaris/var' with mountpoint '/var' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'VARSHARE' with mountpoint '/var/share' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'VARSHARE/tmp' with mountpoint '/var/tmp' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'VARSHARE/kvol' with mountpoint '/var/share/kvol' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'VARSHARE/zones' with mountpoint '/system/zones' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'export' with mountpoint '/export' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'export/home' with mountpoint '/export/home' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'export/home/admin' with mountpoint '/export/home/admin' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'guests' with mountpoint '/guests' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'VARSHARE/sstore' with mountpoint '/var/share/sstore/repo' (ZPOOL: rpool) to dataset 'ldom2_root'
No vServer found on Node ldom2.
Node ldom2 import finished

real 1m20.849s
user 0m18.075s
sys 0m51.073s


After this successful import, we can use VDCF's gdom command to manage these two LDoms.

-bash-5.1$ gdom -c show cdom=s0012
Name   cState  rState            CDom   Model            OS  Patch-Level        Cores  Max-Cores  VCPUs  RAM/GB  #V  Comment
ldom1  ACTIVE  ACTIVE (RUNNING)  s0012  ORCL,SPARC-S7-2  10  150400-40 (U11+)   0      0          1      4.0     0   Imported GDom
ldom2  ACTIVE  ACTIVE (RUNNING)  s0012  ORCL,SPARC-S7-2  11  4.0.0.1.15.0 (U4)  0      0          1      4.0     0   Imported GDom


Operation, migration, patching/updating and monitoring with VDCF are just as easy
as the import, thanks to its consistent CLI.

Check out the VDCF demo videos about other powerful features:
https://www.youtube.com/user/JomaSoftVideo


28 October 2022

Using Ops Center on current Solaris 11.4 SRUs is a fight

Ops Center causes trouble because it depends on old software versions,
which are no longer available by default on current Solaris 11.4 SRUs.

It requires Python 2.7 and Perl 5.22.
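
A quick way to verify whether these legacy runtimes are still installed on a system
is shown below; the IPS package names are assumptions based on the usual Solaris
naming scheme and may differ in your repository.

# Lists the install state of the legacy runtimes Ops Center depends on
# (package names assumed, adjust as needed)
pkg list -a runtime/python-27 runtime/perl-522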


On Control Domains using SRU48 or later you even need to open an Oracle Support Service Request to download a required Agent bugfix.


Here are a few MOS Doc IDs with details about the workarounds:


Ops Center 12.4: CDOM Agents fail to start after a Solaris upgrade to 11.4 SRU 48 (Doc ID 2892465.1)

Ops Center 12.4 upgrades to Solaris 11.4 SRU 39 on an EC will fail (Doc ID 2826475.1)

Ops Center Will Not Start After Upgrading to Solaris 11.4.3- SRU 30 - Svc:/application/scn/ajaxterm:default is Restarting Too Quickly (Doc ID 2760685.1)

Ops Center: Running With Solaris 11.4 SRU21 or Higher Precautions (Doc ID 2783309.1)


If you are happy with CLI tools, the JomaSoft VDCF framework is an alternative management solution.
https://www.jomasoft.ch/vdcf/



05 October 2022

Oracle Systems Engineering Forum 2022

Join the Oracle Systems Engineering Forum 2022 (online, on Zoom)

Areas: Oracle Cloud@Customer, Oracle ZFS, Oracle ZDLRA, Oracle SPARC and Solaris

Dates: Nov 1st and 2nd

Register here


31 August 2022

Support State of Oracle SPARC T5-2 Server

The SPARC T5-2 is a very successful server, but a bit old now.
Oracle shipped the last T5-2s five years ago in 2017.


Oracle still supports this server, but five years after Last Ship,
delivery of replacement parts is no longer guaranteed.


But there are newer generations of Oracle SPARC servers available
with faster CPUs and memory and lower power consumption. There are various
options to migrate the existing LDoms, zones or apps to the new servers.


Take a look at the new SPARC servers at
https://www.jomasoft.ch/hardware/


17 August 2022

New Features in Solaris 11.4 SRU48 (August 2022)

Another quarterly Solaris "feature" SRU, including:

Kernel Zones: Live storage migration
New defaults for Auditing
quota check removed from /etc/profile
zfs list/get -u (unsorted, see the sketch below)
Memory Reservation Pool for OSM
svcs ISO standard date format
Python 2.7 Freeze/Remove admin helper scripts
Node.js 18 for Solaris x86
Configurable ldmd certificate locations for LDoms
Ruby 3.1

and many, many FOSS updates
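
As one example from the list above: the new -u option makes zfs list return its output
unsorted, which avoids the sorting overhead on pools with very many datasets. A minimal
sketch (the other options are standard zfs list usage):

# -u (new in SRU48) skips sorting of the dataset list
zfs list -u -t filesystem -o name,used,mountpoint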

Stay tuned for more blogs about these new features