18 January 2022

Current State and Updates for Oracle Solaris 01/2022

Oracle Solaris 11.4 is supported by Oracle until at *least* 11/2034.
I'm not aware of any other vendor offering such a long-term guarantee.

Today, 18 January 2022, SRU 41 (CPU JAN 2022) was released.
All the details are on MOS in Doc 2433412.1.
Every quarter a CPU (Critical Patch Update) is made available,
which is recommended for installation on production systems.

Every month after the CPU, an SRU with several new features
is released. Stay tuned for February 2022!

Additionally, patches for Solaris 10 and an ESU for Solaris 11.3
were made available. These updates are only available to customers
with "Extended/Vintage Support".

Solaris 11.3 ESU 36.27   MOS 2759706.1
Solaris 10   a few new patches

This Extended Support for the older Solaris releases ends in 01/2024.
Time to plan and prepare upgrades to Solaris 11.4.


22 December 2021

Oracle Increases Max Memory for SPARC T8 Servers by 2x

Oracle now delivers 128 GB DIMMs for the SPARC T8 and M8 servers.

Using these new DIMMs, every SPARC M8 CPU now has access to 2 TB of memory.

All the details are on the Oracle Blog:

https://blogs.oracle.com/oracle-systems/post/announcing-new-enhancements-to-sparc-t8-and-m8-servers


02 December 2021

Adjust Time Quickly at Boot on Solaris Servers

On Solaris 11.4, NTP updates the time after boot.
By default this can take a few minutes, because the NTP client
waits for a few replies from the NTP servers.

To update the time quickly, it is recommended to use the 'burst iburst'
flags in /etc/inet/ntp.conf:

server <ntpserverip> burst iburst

With this configuration, NTP adjusts the time a few seconds after start.
This is an important configuration if you start your applications or databases automatically.
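For the new flags to take effect, the NTP service has to be restarted. A minimal sketch, assuming the default service FMRI and config path of a Solaris 11.4 install:

```shell
# edit the NTP client configuration
vi /etc/inet/ntp.conf

# restart the NTP service so the 'burst iburst' flags are picked up
svcadm restart svc:/network/ntp:default

# verify that the client is polling its servers
ntpq -p
```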


25 October 2021

Performance Impact of a Large Solaris 11 IPS Repo

The content and size of a Solaris IPS repo have an impact on your update processes.
Of course, more SRUs increase the size of the repository and the duration of package downloads and operations.

To compare the difference, we have two repos on a SPARC S7 LDom.

The first one with 60 updates: all U4 SRUs and a few U3 SRUs

-bash-5.0$ ipsadm -c show_repo repository=http://192.168.20.75:8282 | grep entire@ | wc -l
60


The second repo with only the U4 GA and the latest SRU 38

-bash-5.0$ ipsadm -c show_repo repository=http://192.168.20.75:8283 | grep entire@ | wc -l
2


When replacing the publisher on a target server, the catalog is downloaded and analyzed.
Look at this huge difference:


# time pkg set-publisher -G "*" -g http://192.168.20.75:8282 solaris

real 4m23.097s
user 4m9.363s
sys  0m12.151s


# time pkg set-publisher -G "*" -g http://192.168.20.75:8283 solaris

real 0m37.629s
user 0m34.968s
sys  0m2.287s


On a SPARC S7 LDom with 3 zones, a 'pkg update -n' (dry run) takes nearly
9 minutes with the large IPS repo.


# time pkg update -n -C 5 --be-name u4.sru38 entire@11.4,5.11-11.4.38.0.1.101.6
...
Planning linked: 3/3 done

real 8m46.280s
user 19m25.699s
sys  0m38.675s


With the smaller repo it takes less than 6 minutes.

Planning linked: 3/3 done

real 5m59.820s
user 12m59.500s
sys  0m27.611s


Summary
Re-create your IPS repo from time to time after updates, once you are sure you no longer need the older SRUs.
But always add the GA version first, and then the required SRUs additionally.
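Such a re-creation could look like the following sketch using pkgrepo and pkgrecv; the source URL and the 'entire' FMRIs are placeholders you have to adjust to your environment:

```shell
# create a fresh, empty repository
pkgrepo create /export/repo/solaris114

# 1) receive the GA version first (placeholder FMRI, adjust it)
pkgrecv -s https://pkg.oracle.com/solaris/support \
        -d /export/repo/solaris114 -r 'entire@11.4-11.4.0'

# 2) then add only the SRUs you still need
pkgrecv -s https://pkg.oracle.com/solaris/support \
        -d /export/repo/solaris114 -r 'entire@11.4-11.4.38'

# rebuild the repository catalog
pkgrepo refresh -s /export/repo/solaris114
```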

13 August 2021

Don't remove Data Disks from Solaris Zpools if performance is important

Solaris 11.4 delivers a new feature to remove data disks from existing zpools.
zpool remove myzpool <disk>

We used this feature a few times without problems in test environments.

But it has a performance impact if the removed disk had data on it,
especially if read performance is important. We know customers with Oracle databases
where latency around 1 ms is expected. After removing a disk from a
large zpool, the performance was terrible, and the only solution was to
re-create the zpool.

It is important to understand that there is an expected performance impact while the disk is being removed.
Sure, the data needs to be copied to the remaining disks.
But even after the removal there can be a major performance impact when the data (from the removed disk) must be read, because the data copy added additional internal redirections.

The recommendation is to use this feature only after accidentally adding
a disk to the wrong zpool. There is no performance impact if the
removed disk holds no data.
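For that accidental-add case the sequence looks like this (pool and disk names are hypothetical):

```shell
# the mistake: disk added to the wrong pool
zpool add datapool c0t5000CCA0ABCD1234d0

# remove it again right away, before data lands on it
zpool remove datapool c0t5000CCA0ABCD1234d0

# zpool status shows the progress of the removal
zpool status datapool
```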

Find the details of this recommendation in the Solaris 11.4 ZFS Manual:
https://docs.oracle.com/cd/E37838_01/html/E61017/remove-devices.html

To avoid such trouble, we disabled the 'dataset -c remdisk' feature for data disks
by default in VDCF version 8.1.8.

More about our VDCF Solaris management product can be found at
https://www.jomasoft.ch/vdcf/


23 April 2021

Did You Know Oracle Solaris Includes Ksplice?

Look what we have here:

-bash-5.0$ pkg list ksplice
NAME (PUBLISHER)     VERSION                    IFO
system/ksplice       11.4-11.4.29.0.1.82.3      i--


Ksplice supports online kernel updates.

In rare cases of kernel issues, Oracle Support delivers
an IDR which is installed online using Ksplice.

For a Solaris admin, such an IDR is handled like any other IDR.
It can be installed as usual with the pkg command.


Here is a sample:

# pkg info -g ./idr4712.1.p5p idr4712
          Name: idr4712
       Summary: To back out This IDR : # /usr/bin/pkg uninstall -r idr4712
   Description: sparc IDR built for release : Solaris 11.4 SRU # 29.82.3
         State: Not installed
     Publisher: solaris
       Version: 1
        Branch: None
Packaging Date: February 12, 2021 at 10:22:38 AM
          Size: 4.08 kB
          FMRI: pkg://solaris/idr4712@1:20210212T102238Z


-bash-5.0$ pkg list -g ./idr4712.1.p5p -af
NAME (PUBLISHER)         VERSION                      IFO
idr4712                  1                            ---
system/kernel/platform   11.4-11.4.29.0.1.82.3.4712.1 ---
system/ksplice           11.4-11.4.29.0.1.82.3.4712.1 ---
system/osnet-splice      11.4-11.4.29.0.1.82.3.4712.1 ---


# pkg set-publisher -g file:///var/tmp/idr4712.1.p5p solaris

# pkg install idr4712
           Packages to install:   2
            Packages to update:   2
            Services to change:   3
       Create boot environment:  No
Create backup boot environment: Yes

..
..
..


Using spliceadm you can verify the installed splices.

# spliceadm
ID        STATE        CVE             BUGID
471201    applied      N/A             32407818


In case of a problem, you can even revert the fix:

# spliceadm reverse 471201
Splice 471201 reversed successfully on Fri Apr 23 13:15:20.

# spliceadm status
ID        STATE        CVE             BUGID
471201    not-applied  N/A             32407818


Another powerful and easy-to-use Solaris feature!