16 December 2022
Solaris X86 on OCI BareMetal or Virtual Machine?
For Solaris workloads Oracle Cloud Infrastructure (OCI) offers multiple options: Bare Metal and Virtual Machine shapes on AMD or Intel CPUs.
With Virtual Machines you get flexible memory and CPU configurations: from 1 OCPU (core) up to 64, and up to 1 TB of RAM.
Depending on your load you can change the size of the VM, but you need to reboot the instance to activate the new settings.
Bare Metal shapes have fixed sizes, from 512 GB up to 2 TB of RAM and from 36 up to 160 OCPUs. The available shapes may be limited in your cloud region.
Bare Metal currently has the limitation that you can't attach UHP (Ultra High Performance) block volumes.
For both types, Bare Metal and VM, you are not charged while your instance is shut down.
My current favorite is the VM type because of the flexibility to resize. So far I have not encountered a significant performance impact from the virtualization layer.
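Resizing a flexible VM shape can be done from the console or the OCI CLI. A minimal sketch (the instance OCID, shape name and values below are placeholders, not from a real system):
# oci compute instance update --instance-id ocid1.instance.oc1..example \
    --shape VM.Standard3.Flex --shape-config '{"ocpus": 8, "memoryInGBs": 128}'
The instance is rebooted as part of such a resize.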
12 December 2022
Solaris 11.4 x86 in Oracle Cloud OCI VM
Oracle currently offers the Oracle Cloud in 40 regions.
One of the many options available is a Solaris 11.4 x86 Virtual Machine.
You can create a VM with up to 1 TB RAM and 64 OCPUs (cores).
The creation is very easy and the setup is done in a few minutes.
How to create such a VM is described on the Oracle Solaris Blog.
The OCI OS Management service is currently not available to monitor the Solaris instances. The commercial JomaSoft VDCF tool is one option for monitoring them.
I/O performance depends on the configuration of the block volumes and on the CPU resources used by the Virtual Machine, but even with 16 cores I never reached more than 100 MB/s.
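Block volume throughput scales with the volume's performance setting (VPUs per GB). A hedged sketch with the OCI CLI (compartment OCID, availability domain and size are placeholders) creating a higher-performance volume:
# oci bv volume create --compartment-id ocid1.compartment.oc1..example \
    --availability-domain AD-1 --size-in-gbs 200 --vpus-per-gb 20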
18 November 2022
Solaris 11.4 SRU51 (Nov 2022) - virtinfo enhancements for Solaris Zones
In the past virtinfo provided only minimal information about the underlying global zone.
root@marcel49:~# uname -a
SunOS marcel49 5.11 11.4.47.119.2 sun4v sparc sun4v non-global-zone
root@marcel49:~# virtinfo get all
NAME CLASS PROPERTY VALUE
non-global-zone current - -
logical-domain parent - -
non-global-zone unsupported status not supported in non-global-zone
kernel-zone unsupported status not supported in non-global-zone
logical-domain unsupported status not supported in non-global-zone
After upgrading to SRU51, we get details from the parent.
Remark: I modified our serial number in the output below.
The serial is helpful if you need to open a case with Oracle Support.
root@marcel49:~# uname -a
SunOS marcel49 5.11 11.4.51.132.1 sun4v sparc sun4v non-global-zone
root@marcel49:~# virtinfo get all
NAME CLASS PROPERTY VALUE
non-global-zone current zonename marcel49
non-global-zone current chassis-serial-number AK99999999
non-global-zone current parent-hostname g0049.jomasoft-lab.ch
logical-domain parent - -
non-global-zone unsupported status not supported in non-global-zone
kernel-zone unsupported status not supported in non-global-zone
logical-domain unsupported status not supported in non-global-zone
Output of single values is also supported:
root@marcel49:~# virtinfo -c current get -H -o value parent-hostname
g0049.jomasoft-lab.ch
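This single-value output makes virtinfo easy to use in scripts. A minimal sketch (the motd update is just an example use, not from the SRU):
#!/bin/sh
# record the parent global zone of this non-global zone
PARENT=$(virtinfo -c current get -H -o value parent-hostname)
echo "hosted on ${PARENT}" >> /etc/motd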
15 November 2022
New Features in Solaris 11.4 SRU51 (November 2022)
Another quarterly Solaris SRU including new features
Live Memory Reconfiguration for Kernel Zones on x86
Setting disk IDs for Kernel Zone live storage migration
Propagating hosting environment information into zones
zpool/zfs -o atime/pathname
/dev/full device
mkfile size argument update
FOSS: GCC 12, perl 5.36, Unbound DNS server
EOF: hal-cups-utils
Stay tuned for more blogs about these new features
01 November 2022
How easy is it to import Ops Center created systems into JomaSoft VDCF?
Ops Center is Oracle's management and monitoring tool for Oracle Solaris. Premier Support for the current version 12.4 ends in April 2024.
VDCF could be an alternative for customers with a focus beyond April 2024. Find more information about VDCF:
https://www.jomasoft.ch/vdcf/
Below we import a control domain and two guest domains, where one is running Solaris 10 and the second is running Solaris 11.4. The process is done in a few minutes. After the import VDCF can be used to operate, manage, update and monitor these systems.
Step 1 / Install the VDCF client on the systems
It is required to execute 2 commands on each system to install the JomaSoft VDCF Client pkg and add the required ssh key.
On Solaris 11.4 SRU60 (Aug 2023) and later we need to use 2 commands:
# wget http://vdcf/pkg/$(uname -p)/JSvdcf-client.pkg
# yes | pkgadd -d ./JSvdcf-client.pkg all
On older Solaris versions a single command works:
# yes | pkgadd -d http://vdcf/pkg/$(uname -p)/JSvdcf-client.pkg all
..............25%..............50%..............75%..............100%
## Download Complete
Processing package instance <JSvdcf-client> from <http://vdcf/pkg/sparc/JSvdcf-client.pkg>
JomaSoft VDCF - Client(sparc) 8.3.5
All rights reserved.
Use is subject to license terms.
## Executing checkinstall script.
Using </opt/jomasoft/vdcf> as the package base directory.
## Processing package information.
## Processing system information.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.
This package contains scripts which will be executed with super-user
permission during the process of installing this package.
Installing JomaSoft VDCF - Client as <JSvdcf-client>
/opt/jomasoft/vdcf/client/conf/patch_kernel.cfg
/opt/jomasoft/vdcf/client/pkgs/JSvdcf-sync.pkg
/opt/jomasoft/vdcf/client/release
/opt/jomasoft/vdcf/client/rexec/asr_mgr_config
/opt/jomasoft/vdcf/client/rexec/cdom_config
<snip>
/opt/jomasoft/vdcf/client/smf/vdcf_iscsi.xml
/opt/jomasoft/vdcf/client/smf/zfs_encryption_load_key.xml
/opt/jomasoft/vdcf/client/smf/zfs_on_nfs.xml
[ verifying class <none> ]
## Executing postinstall script.
*** MANUAL-TASK TODO ***
/opt/jomasoft/vdcf/client/sbin/update_key -u <FLASH_WEBSERVER_URL>
Installation of <JSvdcf-client> was successful.
# /opt/jomasoft/vdcf/client/sbin/update_key -u http://vdcf
Obtaining public key ...
done.
Step 2 / Import the systems into VDCF
The systems are integrated into VDCF by executing a single node import command on the VDCF central management server.
First we have to import the control domain.
-bash-5.1$ time node -c import name=s0012
Importing new Node s0012 ...
Warning: Permanently added 's0012,192.168.100.12' (ED25519) to the list of known hosts.
Discover Systeminfo ...
Discover Rootdiskinfo ...
Discover Diskinfo ...
This may take some time, it depends on the number of disks
.......................
Discover Netinfo ...
Node configuration successfully added.
WARN: No matching build found. Patchlevel 4.48.0.1.126.1 set as build name.
System registration done for s0012.
registering disks from node s0012
New visible Lun 60002AC000000000000004AF0001507B Size: 30.00 GB
New visible Lun 60002AC000000000000004C00001507B Size: 30.00 GB
New visible Lun 60002AC000000000000004C20001507B Size: 30.00 GB
New visible Lun 60002AC000000000000004C80001507B Size: 30.00 GB
New visible Lun 60002AC000000000000004CC0001507B Size: 30.00 GB
Registered BootDisk of Node s0012: 60002AC000000000000004D20001507B Size: 50.00 GB
Registered new Lun: 60002AC000000000000004E00001507B Size: 30.00 GB
Registered new Lun: 60002AC000000000000004E10001507B Size: 30.00 GB
Found Control Domain s0012 (primary).
Found new control domain on 's0012'
importing node datasets from node s0012
Root Dataset s0012_root (ZPOOL: rpool) with Size 49.79 GB successfully imported from Node s0012
Successfully added node filesystem 'ROOT/s11.4.48.0.1.126.1' with mountpoint '/' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'ROOT/s11.4.48.0.1.126.1/var' with mountpoint '/var' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE' with mountpoint '/var/share' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE/tmp' with mountpoint '/var/tmp' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE/kvol' with mountpoint '/var/share/kvol' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE/zones' with mountpoint '/system/zones' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE/cores' with mountpoint '/var/share/cores' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE/crash' with mountpoint '/var/share/crash' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'export' with mountpoint '/export' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'export/home' with mountpoint '/export/home' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'export/home/admin' with mountpoint '/export/home/admin' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'guests' with mountpoint '/guests' (ZPOOL: rpool) to dataset 's0012_root'
Successfully added node filesystem 'VARSHARE/sstore' with mountpoint '/var/share/sstore/repo' (ZPOOL: rpool) to dataset 's0012_root'
No vServer found on Node s0012.
WARN: Add console configuration manually using: console -c add node=s0012
Node s0012 import finished
real 1m18.610s
sys 0m56.736s
Now we can import ldom1 with Solaris 10 and ldom2 with Solaris 11.4
-bash-5.1$ time node -c import name=ldom1
Importing new Node ldom1 ...
Warning: Permanently added 'ldom1,192.168.20.201' (RSA) to the list of known hosts.
Discover Systeminfo ...
psrinfo: Physical processor view not supported
Discover Rootdiskinfo ...
Discover Diskinfo ...
This may take some time, it depends on the number of disks
Discover Netinfo ...
Discovered Node ldom1 as Guest Domain ldom1 on Control Domain s0012
Discovering CDom s0012 ...
Found Control Domain s0012 (primary).
using partitioning.cfg as profile for root disk.
GDom ldom1 (Imported GDom) is created.
Commit guest domain to update configuration
AutoBoot for GDom ldom1 set to false
Network 'management' for GDom 'ldom1' defined
Assigned disk 60002AC000000000000004AF0001507B as ROOTDISK to GDom ldom1
committing guest domain <ldom1>
Updating ActiveBuild from '-' to 'imported' for Node ldom1
System registration done for ldom1.
node with all vservers being checked: ldom1
check on node ldom1 successful
patch deployment updated from node ldom1
registering disks from node ldom1
importing node datasets from node ldom1
Root Dataset ldom1_root (ZPOOL: rpool) with Size 29.79 GB successfully imported from Node ldom1
Successfully added node filesystem 'ROOT/_SUNWCAT_S' with mountpoint '/' (ZPOOL: rpool) to dataset 'ldom1_root'
Successfully added node filesystem 'export' with mountpoint '/export' (ZPOOL: rpool) to dataset 'ldom1_root'
Successfully added node filesystem 'export/home' with mountpoint '/export/home' (ZPOOL: rpool) to dataset 'ldom1_root'
No vServer found on Node ldom1.
Node ldom1 import finished
real 1m3.560s
sys 0m37.529s
-bash-5.1$ time node -c import name=ldom2
Importing new Node ldom2 ...
Warning: Permanently added 'ldom2,192.168.20.202' (ED25519) to the list of known hosts.
Discover Systeminfo ...
Discover Rootdiskinfo ...
Discover Diskinfo ...
This may take some time, it depends on the number of disks
Discover Netinfo ...
Discovered Node ldom2 as Guest Domain ldom2 on Control Domain s0012
Importing Guest Domain ldom2 ...
Discovering CDom s0012 ...
using partitioning.cfg as profile for root disk.
GDom ldom2 (Imported GDom) is created.
Commit guest domain to update configuration
AutoBoot for GDom ldom2 set to false
Network 'management' for GDom 'ldom2' defined
Assigned disk 60002AC000000000000004E00001507B as ROOTDISK to GDom ldom2
committing guest domain <ldom2>
WARN: No matching build found. Patchlevel 4.0.0.1.15.0 set as build name.
Updating ActiveBuild from '-' to '4.0.0.1.15.0' for Node ldom2
System registration done for ldom2.
registering disks from node ldom2
importing node datasets from node ldom2
Root Dataset ldom2_root (ZPOOL: rpool) with Size 29.79 GB successfully imported from Node ldom2
Successfully added node filesystem 'ROOT/solaris' with mountpoint '/' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'ROOT/solaris/var' with mountpoint '/var' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'VARSHARE' with mountpoint '/var/share' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'VARSHARE/tmp' with mountpoint '/var/tmp' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'VARSHARE/kvol' with mountpoint '/var/share/kvol' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'VARSHARE/zones' with mountpoint '/system/zones' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'export' with mountpoint '/export' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'export/home' with mountpoint '/export/home' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'export/home/admin' with mountpoint '/export/home/admin' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'guests' with mountpoint '/guests' (ZPOOL: rpool) to dataset 'ldom2_root'
Successfully added node filesystem 'VARSHARE/sstore' with mountpoint '/var/share/sstore/repo' (ZPOOL: rpool) to dataset 'ldom2_root'
No vServer found on Node ldom2.
Node ldom2 import finished
real 1m20.849s
user 0m18.075s
-bash-5.1$ gdom -c show cdom=s0012
Name cState rState CDom Model OS Patch-Level Cores Max-Cores VCPUs RAM/GB #V Comment
ldom1 ACTIVE ACTIVE (RUNNING) s0012 ORCL,SPARC-S7-2 10 150400-40 (U11+) 0 0 1 4.0 0 Imported GDom
ldom2 ACTIVE ACTIVE (RUNNING) s0012 ORCL,SPARC-S7-2 11 4.0.0.1.15.0 (U4) 0 0 1 4.0 0 Imported GDom
Operation, migration, patch/update and monitoring using VDCF is as easy as importing, thanks to its consistent CLI.
Check out the VDCF demo videos about other powerful features:
https://www.youtube.com/user/JomaSoftVideo
28 October 2022
Using Ops Center on current Solaris 11.4 SRUs is a fight
Ops Center causes trouble because it uses old software versions, which are no longer available by default on current Solaris 11.4 SRUs.
Ops Center 12.4 upgrades to Solaris 11.4 SRU 39 on an EC will fail (Doc ID 2826475.1)
Ops Center Will Not Start After Upgrading to Solaris 11.4.3- SRU 30 - Svc:/application/scn/ajaxterm:default is Restarting Too Quickly (Doc ID 2760685.1)
Ops Center: Running With Solaris 11.4 SRU21 or Higher Precautions (Doc ID 2783309.1)
If you are happy with CLI tools, the JomaSoft VDCF framework is an alternative management software.
https://www.jomasoft.ch/vdcf/
05 October 2022
Oracle Systems Engineering Forum 2022
Join the Oracle Systems Engineering Forum 2022 (online on zoom)
Areas: Oracle Cloud @ customer,
Oracle ZFS, Oracle ZDLRA, Oracle SPARC and Solaris
Dates: Nov 1st and 2nd
31 August 2022
Support State of Oracle SPARC T5-2 Server
The SPARC T5-2 is a very successful server, but it is a bit old now.
Oracle shipped the last T5-2s five years ago, in 2017.
Replacement parts delivery is not guaranteed anymore.
Current SPARC servers offer faster CPUs and memory and use less power. There are various options to migrate the existing LDoms, Zones or Apps to the new servers.
https://www.jomasoft.ch/hardware/
17 August 2022
New Features in Solaris 11.4 SRU48 (August 2022)
Another quarterly Solaris "feature" SRU including
Kernel Zones: Live storage migration
New defaults for Auditing
quota check removed from /etc/profile
zfs list/get -u (unsorted)
Memory Reservation Pool for OSM
svcs ISO standard date format
Python 2.7 Freeze/Remove admin helper scripts
Node.js 18 for Solaris x86
Configurable ldmd certificate locations for LDoms
Ruby 3.1
and many, many FOSS updates
Stay tuned for more blogs about these new features
12 August 2022
PostgreSQL - IPS packages available for Solaris 11.4 SPARC
On our JomaSoft Website we launched a download page for Open Source software
https://www.jomasoft.ch/downloads/#js-opensource
There you can find current PostgreSQL packages: versions 13.8, 14.5 and 15 Beta3
23 May 2022
Solaris 11.4 SRU45 (May 2022) - zstd compression utility
Solaris 11.4.45 now includes the zstd compression utility.
-bash-5.1$ zstd -V
*** zstd command line interface 64-bits v1.5.0, by Yann Collet ***
For comparison I used an Oracle DB 19c home as a tar file.
I executed these tests on a SPARC S7 LDom with 1 core.
# ls -lh 19c.tar
-rw-r--r-- 1 root root 6.66G May 23 17:36 19c.tar
zstd achieves higher compression than lz4, but takes more time.
# time zstd 19c.tar -o 19c.tar.zstd
19c.tar : 39.24% (7153274368 => 2806612298 bytes, 19c.tar.zstd)
real 3m26.408s
user 3m29.804s
sys 0m6.698s
# time lz4 19c.tar
Compressed filename will be : 19c.tar.lz4
Compressed 7153274368 bytes into 3629706666 bytes ==> 50.74%
real 2m35.092s
user 2m25.227s
sys 0m6.852s
# ls -lh 19c.tar.lz4 19c.tar.zstd
-rw-r--r-- 1 root root 3.38G May 23 17:36 19c.tar.lz4
-rw-r--r-- 1 root root 2.61G May 23 18:14 19c.tar.zstd
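zstd's compression level is also tunable (default 3, maximum regular level 19). A hedged example trading even more CPU time for a smaller result:
# zstd -19 19c.tar -o 19c.tar.zstd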
With decompression zstd is a bit faster than lz4.
# time unlz4 19c.tar.lz4
Decoding file 19c.tar
19c.tar.lz4 : decoded 7153274368 bytes
real 2m9.913s
user 1m23.008s
sys 0m8.339s
# time unzstd 19c.tar.zstd
19c.tar.zstd : 7153274368 bytes
real 1m52.263s
user 1m1.836s
sys 0m9.236s
18 May 2022
New Features in Solaris 11.4 SRU45 (May 2022)
Every month after the CPU (Critical Patch Update) Oracle releases
a new feature Release for Oracle Solaris.
In May 2022 it is again a BIG one
Kernel Zone Memory Live Reconfiguration (MLR) on SPARC
ZFS File Retention
vmstat uses sstore for history
packet filter (PF) firewall for Solaris 10 Branded Zones
Zstandard (zstd) fast compression utility
NTP Monitor only mode (new SMF ntp:monitor)
PHP: version 8.1 added and version 7.3 removed
Puppet: version 6.26 added / version 5 and Puppet Master removed
and many, many more FOSS updates
Stay tuned for more blogs about these new features
16 May 2022
How to Migrate from Solaris 10 to Solaris 11
Solaris 10 was released in 2005. Currently Solaris 10 is in Extended Support
and Oracle is planning to provide patches until January 2024.
Great News About Extended Support for Oracle Solaris 10 OS
There is no easy upgrade from Solaris 10 to 11 because Solaris 11
includes many new features not available in Solaris 10.
best option
At our customers we install new Solaris 11 systems or LDoms, install
the application, do the configuration and move the data over from Solaris 10.
compromise option
If your application is not supported on Solaris 11, or you want to migrate
away from older hardware in a short time, then your option is Solaris 10 Branded
Zones. You can install Solaris 10 Branded Zones on a Solaris 11 server
based on a 1:1 archive of your Solaris 10 system, as sketched below.
Lift and Shift Guide - Migrating Workloads from Oracle Solaris 10 SPARC Systems to Oracle Solaris 10 Branded Zones
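A minimal sketch of this approach (zone name and archive path are examples; see the guide above for the full procedure):
# zonecfg -z s10zone 'create -t SYSsolaris10'
# zoneadm -z s10zone install -a /export/archives/s10system.flar -u
# zoneadm -z s10zone boot
The -u option runs sys-unconfig on the installed zone, so it can be configured with a new identity on first boot.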
next step: sysdiff
If you already have a Solaris 10 Branded Zone, you can use the new sysdiff tool
to identify application binaries and files and convert them into a Solaris 11 IPS package.
This IPS package can then be deployed on a Solaris 11 server.
sysdiff: moving Oracle Solaris 10 legacy 3rd party apps to 11.4
If you need support for such migrations, just contact our experienced consultants at JomaSoft:
https://www.jomasoft.ch/about/#team
02 March 2022
Solaris 11.4 SRU42 / Non-Production Personal Use Update
For developers Solaris 11.4 GA was made available in 2018.
Updates/SRUs are (only) delivered to customers with a support contract.
Since Oracle changed to the continuous delivery model, there had been
no update for developers for 42 months.
This changed today!
The public Solaris IPS repository now contains SRU42:
http://pkg.oracle.com/solaris/release/en/index.shtml
More details on:
https://blogs.oracle.com/solaris/post/announcing-the-first-oracle-solaris-114-cbe
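To update an existing 11.4 GA installation from the public release repository, something like this should work (a hedged sketch; verify your publisher origins first):
# pkg set-publisher -G '*' -g https://pkg.oracle.com/solaris/release/ solaris
# pkg update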
25 February 2022
Solaris 11.4 SRU42 (Feb 2022) - /var/share/cores FS and coreadm defaults
/var/share/cores is now a separate filesystem
-bash-5.1$ zfs list rpool/VARSHARE/cores
NAME USED AVAIL REFER MOUNTPOINT
rpool/VARSHARE/cores 31K 22.1G 31K /var/share/cores
If you upgrade from a previous SRU to SRU42 or later,
clean up /var/share/cores first, because the existing files
will be migrated to the new filesystem.
coreadm now has better defaults for the core file naming
root@marcel49:~# coreadm | head -3
global core file pattern: /var/cores/core.%z.%f.%u.%p
global core file content: default
kernel zone core file pattern: /var/cores/kzone.%z.%t
With this new core file pattern (%z = zone name, %f = executable name, %u = effective UID, %p = PID),
the cores are easier to identify and are overwritten less often.
root@marcel49:~# ls -lh /var/cores/core.marcel49.sleep.0.21198
-rw------- 1 root root 4.16M Feb 25 13:19 /var/cores/core.marcel49.sleep.0.21198
If the pattern was not configured before the upgrade to SRU42 or later,
the new default is set automatically.
If you used another pattern before you upgraded to SRU42 or later,
you can set the new standard/default manually:
# coreadm -g /var/cores/core.%z.%f.%u.%p
23 February 2022
Solaris 11.4 SRU42 (Feb 2022) - LDom Migration Class 2
SRU42 introduced a new LDom migration class. This class
allows cross-CPU live migration between SPARC S7, M7 and M8
CPUs. LDoms using migration-class2 can use the ADI features
of these new CPUs and can still live migrate between these modern
SPARC servers.
To change the cpu-arch setting you need to stop and unbind
your LDom:
# ldm stop g0061
# ldm unbind g0061
# ldm set-domain cpu-arch=migration-class2 g0061
# ldm bind g0061
# ldm start g0061
The JomaSoft VDCF management software recognises this new
cpu-arch setting with version 8.2.2 or later.
Find out more about VDCF
https://www.jomasoft.ch/vdcf/
21 February 2022
Solaris 11.4 SRU42 (Feb 2022) - ansible
The Oracle Solaris 11.4 SRU42 (Feb 2022) delivers ansible 2.10.
It is not installed by default. To use ansible, install these 3 packages:
pkg install ansible jinja2 pyyaml
Then create your own small ansible config file:
-bash-5.1$ cat .ansible.cfg
[defaults]
inventory=/export/home/marcel/ansible_hosts
and you are ready to start
-bash-5.1$ ansible --version
ansible 2.10.15.post0
config file = /export/home/marcel/.ansible.cfg
configured module search path = ['/export/home/marcel/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.7/vendor-packages/ansible
executable location = /usr/bin/ansible
python version = 3.7.10 (default, Dec 22 2021, 02:24:16) [GCC 10.3.0]
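As a quick functional test, a minimal inventory and an ad-hoc ping module call could look like this (hostnames are examples; the targets need ssh access and Python):
-bash-5.1$ cat /export/home/marcel/ansible_hosts
[solaris]
marcel49
-bash-5.1$ ansible solaris -m ping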
The ansible Users Guide can be found here
https://docs.ansible.com/ansible/latest/user_guide/index.html
18 February 2022
Solaris 11.4 SRU42 (Feb 2022) - zpool status -s
zpool status -s does now report many details about
the allocations on the disks
-bash-5.1$ zpool status -s all v0144_data
pool: v0144_data
id: 13650650466694793377
state: ONLINE
scan: scrub repaired 0 in 1s with 0 errors on Thu Jan 20 08:24:49 2022
config:
NAME STATE READ WRITE CKSUM AUNIT LSIZE PSIZE SLOW RPAIR RSLVR ALLOC FREE %FULL
v0144_d ONLINE 0 0 0 - - - - - - 15.4G 34.3G 31.0
c1d1 ONLINE 0 0 0 512 512 512 - - - 7.26G 12.6G 36.5
c1d6 ONLINE 0 0 0 512 512 512 - - - 8.1G 21.6G 27.2
errors: No known data errors
16 February 2022
New Features in Solaris 11.4 SRU42 (Feb 2022)
Every month after the CPU (Critical Patch Update) Oracle releases
a new feature Release for Oracle Solaris.
In February 2022 it is a BIG one
Ansible 2.10
OpenSSL 3.0
ldm command enhancements
Ldom migration-class2 for SPARC T7/S7/T8
split -b
coreadm enhanced defaults
zpool -s flag
Apache 2.4.52
new FS /var/share/cores
and many, many FOSS updates
Stay tuned for more blogs about these new features
18 January 2022
Current State and Updates for Oracle Solaris 01/2022
Oracle Solaris 11.4 is supported by Oracle at *least* until 11/2034.
I'm not aware of any other vendor with such a long-term guarantee.
Today, 18 January 2022, SRU41 (CPU JAN 2022) was released.
All the details on MOS in Doc 2433412.1
Every quarter a CPU (Critical Patch Update) is made available,
which is recommended to be installed on production systems.
Every month after the CPU an SRU with several new features
is released. Stay tuned for February 2022!
Additionally, patches for Solaris 10 and an ESU for Solaris 11.3
were made available. These updates are only available for customers
with "Extended/Vintage Support"
Solaris 11.3 ESU 36.27 MOS 2759706.1
Solaris 10 a few new patches
This Extended Support for the older Solaris releases ends in 01/2024.
Time to plan and prepare upgrades to Solaris 11.4.