OpenSolaris-derived ZFS NAS/SAN (OmniOS, OpenIndiana, Solaris and napp-it)

arryo

n00b
Joined
May 23, 2012
Messages
57
Hi Gea,

Have you used OmniOS as a torrent server? Currently I'm using Transmission, but the program hangs if the download speed is too fast. Do you recommend a good program that can handle fast download speeds?

Thanks
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,071
Interestingly, the interface is significantly quicker now; for example clicking on ZFS Filesystems and the list populating in less than 3sec rather than about 15sec before.
A newer napp-it is faster than an older one due to optimized reading of ZFS properties.
Another reason may be that acceleration is enabled (properties are read in the background); see the top-level menu near logout.
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,071
Hi Gea,

Have you used OmniOS as a torrent server? Currently I'm using Transmission, but the program hangs if the download speed is too fast. Do you recommend a good program that can handle fast download speeds?

Thanks

I have never tried one. You could run one in a Linux LX zone (container).
 

ARNiTECT

Weaksauce
Joined
Aug 4, 2012
Messages
65
Fixed!
NFS-backed VM, CrystalDiskMark on the C: drive: read 1000MB/s, write 700MB/s.
As per my post in the STH VMware forum: I had messed up something in ESXi networking in my pursuit of 10GbE.
 

ARNiTECT

Weaksauce
Joined
Aug 4, 2012
Messages
65
Hi again,

As mentioned above in my overly long post about 10G speeds: CPU usage seems high when idle. ESXi is now reporting an average of 30% for 2x vCPU and 45% for 1x vCPU (network & disk near 0%).
Other ESXi VMs are near 0% CPU when idle.

The pools are not currently being accessed: no VMs or file transfers, no jobs running, and I have disabled auto-service, iSCSI, SMB and NFS. It was also like this before my recent reinstall.
The little napp-it CPU monitor is green and most processes are near 0%, except that 'busy (iostat): last 10s' is between 20-100% and typically about 40%.

The napp-it VM has 93 filesystems across 3 pools: 3x NVMe vmdk Z1 / 6x 8TB HDD RAID10 / 8x 3TB HDD Z2, plus 2x vmxnet3.
Memory is 48GB; the CPU is a Xeon E-2278G 8-core, base 3.4GHz, boost 5GHz.

What is it processing? Is this typical for OmniOS?
Under the menu Extensions > Realtime Monitor, should I see some activity?

Edit: It appears to be mostly the interface, as it eventually dropped to 10-15% a short while after I logged off from the napp-it web GUI.
I left napp-it running (logged off) while I was away and then logged back on about 10 minutes ago, as in the ESXi screen grab below.
So for my system, 1x vCPU usage is 10-15% when idle and logged off, and up to 45% when logged on.

[attachment: ESXi CPU usage screen grab]

BBQ quote "If you're looking, you ain't cooking"
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,071
What is it processing? Is this typical for OmniOS?
Under the menu Extensions > Realtime Monitor, should I see some activity?

If you have enabled acceleration (acc) or monitoring (mon) in napp-it (top menu, right of logout), there are background tasks running. The acc tasks read system information in the background to improve responsiveness for some time after the last menu action.
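If you want to see what is actually consuming CPU, prstat (the illumos counterpart of top) will show it; the napp-it background agents are Perl scripts, so they show up as perl processes. A quick check could look like this:
Code:
# sort by CPU, show the top 15 processes, sample every 2 seconds, 5 samples
prstat -s cpu -n 15 2 5

# list running perl processes with their arguments (napp-it agents among them)
pgrep -fl perl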
 

danswartz

2[H]4U
Joined
Feb 25, 2011
Messages
3,711
Annoying situation just now: a 4x2 RAID10 of spinners, with one SSD as SLOG device, running on the latest OmniOS. I meant to plug in another SSD to do hotplug backups, but pulled the slog device by mistake. I plugged it back in, but the pool is stuck:

Code:
        NAME                       STATE     READ WRITE CKSUM
        jbod                       DEGRADED     0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500412EE41Fd0  ONLINE       0     0     0
            c0t5000C50041BD3E87d0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500426C6F73d0  ONLINE       0     0     0
            c0t5000C50055E99CDFd0  ONLINE       0     0     0
          mirror-2                 ONLINE       0     0     0
            c0t5000C50055E9A7A3d0  ONLINE       0     0     0
            c0t5000C5005621857Bd0  ONLINE       0     0     0
          mirror-3                 ONLINE       0     0     0
            c0t5000C50056ED546Fd0  ONLINE       0     0     0
            c0t5000C50057575FE3d0  ONLINE       0     0     0
        logs
          c5t5000CCA04DB0D739d0    REMOVED      0     0     0


from dmesg:

Feb 1 09:19:57 omnios zfs: [ID 961531 kern.warning] WARNING: Pool 'jbod' has encountered an uncorrectable I/O failure and has been suspended; `zpool clear` will be required before the pool can be written to.
root@omnios:~# zpool clear jbod

But that is hanging as well. I have a RAID1 of two SSDs serving a vSphere datastore, so a reboot at this point is inconvenient. This seems like a bug, no?
 

danswartz

2[H]4U
Joined
Feb 25, 2011
Messages
3,711
Ugh ugh ugh. No time to experiment - it isn't just that pool - nothing is working and guests are failing, so it's time for a hard reboot. Damn!!! Even though it said only jbod was stuck, apparently I/O involving the SSD RAID1 (serving vSphere) was also hosed.
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,071
If a write to a basic vdev stalls (and that is what you have as slog), ZFS waits forever for the I/O to finish, as otherwise the last sync writes would be lost.
Action: reboot, then "replace" the slog with the same device, or remove and re-add it. Maybe a clear is enough then. The last sync writes are lost (up to 4GB of the most recent writes).

Only if there are no uncompleted writes does a lost slog simply switch over to on-pool logging. This is one of the cases where a slog mirror helps.
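After the reboot, the recovery could look roughly like this (pool and disk names taken from your status output; an untested sketch, adjust to your system):
Code:
# try a clear first
zpool clear jbod

# if the slog still shows as REMOVED, replace it with itself ...
zpool replace jbod c5t5000CCA04DB0D739d0

# ... or remove the log vdev and add it back
zpool remove jbod c5t5000CCA04DB0D739d0
zpool add jbod log c5t5000CCA04DB0D739d0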
 

danswartz

2[H]4U
Joined
Feb 25, 2011
Messages
3,711
I've looked high and low and can't find an answer; hopefully I just missed it. I have 2 stacked switches. My 3 ESXi hosts each have ethernet connections to both, not using LACP, just failover order (i.e. NIC teaming). Is it possible to do this for my OmniOS storage appliance? Looking at the napp-it GUI, I can't see how, just creating a LAG, which I don't want to do (I've had to re-install occasionally on an ESXi host, and it's a drag getting it reconnected to the 2 switches so I'm back up). This is purely a belt and suspenders exercise - I don't need LACP...
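For reference, I think the switch-independent active/passive equivalent on the OmniOS side would be IPMP; a rough sketch of what I have in mind (link names and address are placeholders, untested):
Code:
# create IP interfaces on the two physical links
ipadm create-ip net0
ipadm create-ip net1

# group them into an IPMP interface and put the address on the group
ipadm create-ipmp -i net0 -i net1 ipmp0
ipadm create-addr -T static -a 192.168.1.10/24 ipmp0/v4

# mark one link as standby for active/passive failover
ipadm set-ifprop -p standby=on -m ip net1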
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,071
napp-it is switching from mini_httpd to Apache 2.4

Up to now the webserver underneath napp-it has been mini_httpd, an ultra-tiny 50kB single-binary webserver. On current operating systems https no longer works due to newer OpenSSL requirements. As there is only little development on mini_httpd, we decided to move to Apache 2.4, as it is part of the Solaris and OmniOS extra repositories with regular bug and security fixes.

Prior to an update to napp-it.dev, install Apache on OmniOS (or rerun the wget installer):
pkg install server/apache-24

A first beta with Apache 2.4 for OmniOS and the new Solaris 11 CBE is the current napp-it 22.dev. After an update, Apache should serve http on port 81 and https on port 82. In case of problems, or if you forgot to install Apache first, restart Apache manually via "/etc/init.d/napp-it start".
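Putting the steps together, the sequence on OmniOS could look like this (the curl checks are only an illustration):
Code:
# install Apache 2.4 from the extra repository
pkg install server/apache-24

# update napp-it to 22.dev, then (re)start the napp-it services
/etc/init.d/napp-it start

# quick check: http on port 81, https on port 82 (-k accepts the self-signed cert)
curl -I http://localhost:81
curl -kI https://localhost:82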

The default Apache config files are under /var/web-gui/data/tools/httpd/apache24/. If you want your own (update-safe) config, use /var/web-gui/_my/tools/apache/httpd.conf as the config file.

https://napp-it.org/extensions/amp_en.html

mini_httpd can be started on demand: "/etc/init.d/napp-it mini"

gea
 

taroumaru

Weaksauce
Joined
Dec 22, 2005
Messages
78
napp-it is switching from mini_httpd to Apache 2.4

.....

gea
Your documentation of Solaris derivatives (installation & setup) in general, and napp-it in particular, is very extensive. I, and quite surely others too, have been relying on it for more than a decade. So I hope that you will also create extensive documentation for the Apache 2.4 based napp-it, with command examples and expected outputs shown in the documentation.

Some questions:
  1. Can we migrate all napp-it settings from the earlier mini_httpd based version to the newer napp-it based on Apache 2.4?
  2. Can we delete/remove, or at least disable, mini_httpd once the new napp-it is running completely on Apache 2.4?
    • This is because mini_httpd running in the background would use CPU/RAM resources
    • mini_httpd running in the background could also end up interfering with Apache 2.4/napp-it
Edit: I see that you have the installation covered here: https://napp-it.org/extensions/amp_en.html
 

taroumaru

Weaksauce
Joined
Dec 22, 2005
Messages
78
Hi _Gea

Sorry for the late reply. I didn't want to experiment on the production system that has the ZFS pool holding all our data. I just finished building a new ZFS pool, to which all the data from the old pool will be transferred. Before this transfer takes place, while it's still safe to do so, I am running various tests.

.....

With up to, say, a dozen filesystems I see no problem with this. It may be different with hundreds of users and the goal of one filesystem per user, but this is a layout I would not prefer.
1. Unfortunately OmniOS/Solaris doesn't allow filesystems to be created within a normal folder; maybe it's a ZFS limitation. So if I have to create all these filesystems, it will be at least 100, but possibly even more than that!
Code:
/
├───folder1
├───folder2
└───folder3
    ├───folder3a
    └───folder3b
        ├───folder3b-1
        └───folder3b-2
Just to share folder3b-2 as a top-level SMB share, I can't simply create it as a ZFS filesystem inside the normal folder 'folder3b'. So 'folder3b' has to be a filesystem, and even 'folder3' has to be a filesystem. At this rate I'll end up with 100 or even more filesystems!

As the pool itself is a ZFS filesystem, this would be possible if you enable SMB on the root filesystem with only simple folders below it. But as I said, this is not the best use case and you possibly create more problems than it solves. For example, you cannot replicate a pool to the top level of another pool, and you cannot use or modify different ZFS properties per use case.

Create one or a few filesystems and share them.

pool/
├───Fs1 = share1
│   ├───folder 1
│   └───folder 2
└───Fs2 = share2
    ├───folder 1
    └───folder 2

This is indeed the ZFS layout you should use. Keep everything simple, use ZFS as intended.
From a client's view, when you connect to the server you will see share1 and share2.
2. I haven't tested this yet, but I've heard that Windows messes up snapshots/Previous Versions when you use multiple filesystems, especially if they're nested.
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,071
1.
A ZFS filesystem can only exist below another ZFS filesystem, but it can be mounted at any point (the mountpoint must be an empty folder; the default is /pool/filesystem). A pool itself is also a ZFS filesystem. This is not a limitation, it is the way ZFS works. Usually you create a ZFS filesystem, e.g. folder3, with as many regular folders below it as needed (like folder3a), not the other way around.

You only want ZFS filesystems instead of regular folders when you want different ZFS properties or dedicated replications.
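A sketch of this on the command line (pool and filesystem names are only examples):
Code:
# a ZFS filesystem always lives below another filesystem (the pool is one itself)
zfs create tank/folder3

# it can be mounted at any empty folder if the default /tank/folder3 does not fit
zfs create -o mountpoint=/export/projects tank/projects

# per-filesystem properties are the main reason to prefer a filesystem over a folder
zfs set compression=lz4 tank/folder3
zfs set quota=500G tank/folder3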

2.
On Solarish with the kernel-based SMB server, snaps and shares are both strictly a property of a ZFS filesystem. This is why snaps as "Previous Versions" work out of the box without problems. This is different when you use SAMBA instead, where shares are not strictly related to filesystems, which means that snaps can be in different places within a share. You must then really take care of the settings or Previous Versions fail.
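As an illustration of the kernel SMB case (names are again only examples):
Code:
# the SMB share is a property of the filesystem itself
zfs set sharesmb=name=share1 tank/folder3

# snapshots of that filesystem appear as "Previous Versions" on Windows clients
zfs snapshot tank/folder3@manual-2022-05-01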
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,071
22.dev with Apache is a beta; grouping, clustering and remote replication are not working yet!

To downgrade, download 21.06, 22.01, 22.02 or 22.03 (they use mini_httpd),
optionally stop Apache manually via pkill -f bin/httpd
and restart mini_httpd via /etc/init.d/napp-it restart
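The downgrade steps above as a compact sketch:
Code:
# stop the Apache instance started by napp-it (optional)
pkill -f bin/httpd

# restart napp-it, which brings mini_httpd back up
/etc/init.d/napp-it restart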
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,071
current 22.dev supports http (port 80), https (port 443) and grouping of appliances
 

AV_Spyder

n00b
Joined
Apr 19, 2022
Messages
2
Hi all
I’ve recently tried a fresh install of OI + napp-it on an HP Microserver (NL-36) and get the following message (error?) when going to either the “Pools” or “ZFS Filesystems” menu:

Can't load '/var/web-gui/data/napp-it/CGI/auto/IO/Tty/Tty.so' for module IO::Tty: ld.so.1: perl: fatal: libgcc_s.so.1: open failed: No such file or directory at /usr/perl5/5.22/lib/i86pc-solaris-64int/DynaLoader.pm line 193. at /var/web-gui/data/napp-it/CGI/IO/Tty.pm line 30. Compilation failed in require at /var/web-gui/data/napp-it/CGI/IO/Pty.pm line 7. BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/CGI/IO/Pty.pm line 7. Compilation failed in require at /var/web-gui/data/napp-it/CGI/Expect.pm line 23. BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/CGI/Expect.pm line 23. Compilation failed in require at /var/web-gui/data/napp-it/zfsos/_lib/illumos/zfslib.pl line 2890. BEGIN failed--compilation aborted at /var/web-gui/data/napp-it/zfsos/_lib/illumos/zfslib.pl line 2890. Compilation failed in require at admin.pl line 419.

I’ve tried fresh installs using various combinations of:
  • OpenIndiana Hipster 21.10, 21.04 and 20.10
  • napp-it 21.06a7 and 18.12w9 (using the downgrade option)

I’ve also tried re-running the online wget installer, as well as the command "ln -s /usr/gcc/6/lib/libgcc_s.so.1 /usr/lib/libgcc_s.so.1" as detailed on the OpenIndiana page on the napp-it website. Unfortunately I still get the same message.

The server was previously running older versions of OI and napp-it from around 2013. I’ve got another HP Microserver running OI 20.10 with napp-it 18.12w7 that is working as expected.

Any ideas how to resolve this, as I can’t create a storage pool using napp-it?

Thanks
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,071
The Tty.so is part of Expect. This error results from a newer, unsupported Perl.
The Tty.so that ships with napp-it comes from OmniOS and supports Perl up to 5.34; it has worked for OI as well.

Does the problem remain after a logout/login in napp-it?
What is the output of perl -v?

btw
OI is more critical than OmniOS, as there are sudden changes in OI because it always follows ongoing Illumos.
OmniOS, in contrast, has a dedicated repository per stable release with no new features, only security and bug fixes up to the next stable, which avoids sudden updates and makes it much more suitable for a reliable storage server.
 

AV_Spyder

n00b
Joined
Apr 19, 2022
Messages
2
Does the problem remain after a logout/login in napp-it?
What is the output of perl -v?
Thanks _Gea for the quick reply.

The problem still remains after logout/login in napp-it and also after restarting the GUI.
Output of perl -v:
  • “perl 5, version 22, subversion 4 (v5.22.4)”
Note that this problem remains when downgrading to napp-it v18.12w9 (free).

My other Microserver (which works fine) is also running Perl v5.22.4. It is running napp-it v18.12w7 (free).

Based on your comments, I'll try OmniOS (especially now that it has a dialogue-based installer).

Thanks
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,071
Update
The Tty.so problem on OpenIndiana with Perl 5.22 is fixed in the current napp-it 21.06, 22.01 and 22.dev
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,071

arryo

n00b
Joined
May 23, 2012
Messages
57
Hi _Gea,

How do I sync the time on OmniOS? Right now I have to manually enter the time and timezone, and the clock keeps falling behind.
 

VtLgsr245

n00b
Joined
Dec 27, 2007
Messages
34
Any idea how to get chrony to display an update in the job summary, like ntpdate used to do for me, Gea?
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,071
Configure chrony in /etc/inet/chrony.conf
and start it via chronyd -q (set the clock and exit).

In the job settings, ignore the return value or check for a valid return value for "ok".
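A minimal sketch, assuming the public pool.ntp.org servers are acceptable (adjust the server line to your environment):
Code:
# /etc/inet/chrony.conf (minimal example)
pool 0.pool.ntp.org iburst

# one-shot sync suitable for a napp-it job: set the clock once and exit
chronyd -q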
 

_Gea

2[H]4U
Joined
Dec 5, 2010
Messages
4,071
NVMe pools can cause problems on OS downgrades

https://illumos.topicbox.com/groups/discuss/T4033e3489b51199d/heads-up-zfs-on-nvme-after-14686
https://www.illumos.org/issues/14686

"If you're not using ZFS on NVMe devices, you can ignore this message.

With the integration of #14686, the nvme driver will start to use new
devid types specifically created for NVMe. Updating to #14686 will be
handled gracefully by ZFS, old pools will import correctly and will have
their vdev labels updated with the new devid types automatically.

After updating to #14686, older illumos versions may fail to import the
ZFS pools as they may get confused by the new (unknown) devid types. To
handle this, #14745 has been integrated in illumos on June 24th,
allowing ZFS to import pools on vdevs with unknown or invalid devids.

In order to be able to import your ZFS pools on an older illumos version
before #14686, such as when booting any earlier boot environment from
before #14686, you must use an illumos version that already has #14745
but not yet #14686, resetting the devids to the older types when
importing the pools. After that you can go back even further before
#14745.

The illumos distribution maintainers have been informed about this issue
and should have backported #14745 to any releases they support. Please
consult your distributions release notes for more information."
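If you have to go back, the sequence would roughly be as follows (the boot environment and pool names are placeholders; pick an intermediate BE that already contains #14745 but not #14686):
Code:
# list the available boot environments
beadm list

# activate an intermediate BE and reboot into it
beadm activate omnios-intermediate
init 6

# importing the pool there resets the devids; afterwards even older BEs can import it
zpool import mypool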
 