A newer napp-it is faster than an older one due to optimized reading of ZFS properties. The interface is noticeably quicker now; for example, clicking on ZFS Filesystems populates the list in under 3 seconds instead of roughly 15 seconds before.
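As a rough illustration of why batched reads help (this is not napp-it's actual code; the pool name tank and the temp file path are placeholders), one recursive call returns all properties at once, while per-filesystem calls scale with the number of datasets:

# one recursive call returns the properties of every filesystem below tank
zfs get -r -H -o name,property,value all tank > /tmp/zfs_props.txt

# the slow alternative: one zfs invocation per filesystem
for fs in $(zfs list -H -o name -r tank); do
    zfs get -H -o value compression "$fs"
done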
Hi Gea,
Have you used OmniOS as a torrent server? Currently I'm using Transmission, but the program hangs if the download speed is too fast. Do you recommend a good program that can handle fast download speeds?
Thanks
What is it processing? Is this typical for OmniOS?
Under the menu Extensions > Realtime Monitor, should I see some activity?
Your documentation of SOLARIS derivatives (installation & setup) in general & napp-it in particular is very extensive. I, and quite surely others too, have been relying on it for more than a decade. Now that napp-it is switching from mini_httpd to Apache 2.4, I hope you will also create equally extensive documentation for the Apache 2.4 based napp-it, with command examples & expected outputs shown in the documentation.
.....
gea
1. Unfortunately OmniOS/SOLARIS doesn't allow filesystems to be created within a normal folder; maybe it's a ZFS limitation. So if I have to create all these filesystems, it may be at least 100, but possibly even more than that!.....
With up to, say, a dozen filesystems I see no problem with this. It may be different with hundreds of users and the goal of using one filesystem per user, but this is a layout I would not prefer.
/
├───folder1
├───folder2
└───folder3
    ├───folder3a
    └───folder3b
        ├───folder3b-1
        └───folder3b-2
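If each of those folders really had to be its own filesystem, every level of the tree would need to be a ZFS dataset rather than a plain directory. A minimal sketch, assuming a pool named 'pool':

# -p creates the missing parent datasets, similar to mkdir -p
zfs create pool/folder1
zfs create pool/folder2
zfs create -p pool/folder3/folder3a
zfs create -p pool/folder3/folder3b/folder3b-1
zfs create pool/folder3/folder3b/folder3b-2

This quickly adds up, which is why the flat layout recommended below is preferable.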
2. I haven't tested this yet, but I've heard that Windows messes up snapshots/Previous Versions when you use multiple filesystems, especially if they're nested.

As the pool itself is a ZFS filesystem, this would be possible if you enable SMB on the root filesystem with only simple folders below. But as I said, this is not the best use case and you possibly create more problems than you solve. For example, you cannot replicate a pool to the top level of another pool, and you cannot use or modify different ZFS properties per use case.
Create one or a few filesystems and share them.
pool/
├───Fs1 = share 1
│   ├───folder 1
│   └───folder 2
└───Fs2 = share 2
    ├───folder 1
    └───folder 2
This is indeed the ZFS layout you should use. Keep everything simple and use ZFS as intended.
From a client's view, when you connect to the server you will see share 1 and share 2.
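A hedged sketch of that recommended layout on OmniOS/illumos (the pool name 'pool' and the share names are placeholders):

zfs create pool/Fs1
zfs create pool/Fs2
# the kernel SMB server publishes each filesystem under the given share name
zfs set sharesmb=name=share1 pool/Fs1
zfs set sharesmb=name=share2 pool/Fs2
# folders inside a share stay plain directories, no extra filesystems needed
mkdir -p /pool/Fs1/folder1 /pool/Fs1/folder2 /pool/Fs2/folder1 /pool/Fs2/folder2

Snapshots and Previous Versions then work per filesystem (pool/Fs1, pool/Fs2), and each share can get its own ZFS properties or replication job.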
Thanks _Gea for the quick reply.
Does the problem remain after a logout/login in napp-it?
What is the output of perl -v?
When I run ntpdate, I got this:

Either via ntpdate (default up to 151038) or chrony (default now) and an "other job",
https://illumos.topicbox.com/groups/omnios-discuss/T5fc9cf2343c39195/fresh-install-vs-upgrade
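For reference, a hedged sketch of the two approaches on OmniOS (the NTP server and the SMF service name are assumptions, not taken from this thread):

# one-shot manual sync, as used by the ntpdate "other job" up to r151038
ntpdate -u 0.pool.ntp.org

# chrony, the current default: enable the service and check the sync state
svcadm enable chrony
chronyc tracking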
Sorry,
I have not connected a target directly from a NIC.
un_phy_blocksize = 0x1000 (4k) | WDC WD100EMAZ-00
Pool details (zdb -C NAPPTANK):
MOS Configuration:
    version: 5000
    name: 'NAPPTANK'
    state: 0
    txg: 49801354
    pool_guid: 9161954606428480179
    errata: 0
    hostid: 659976810
    hostname: 'OMNIOS'
    com.delphix:has_per_vdev_zaps
    hole_array[0]: 1
    vdev_children: 3
    vdev_tree:
        type: 'root'
        id: 0
        guid: 9161954606428480179
        children[0]:
            type: 'raidz'
            id: 0
            guid: 6478735048776485608
            nparity: 1
            metaslab_array: 30
            metaslab_shift: 37
            ashift: 12
            asize: 40003271917568
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 205
            children[0]:
                type: 'disk'
                id: 0
                guid: 11592206082081286968
                path: '/dev/dsk/c3t5000CCA267CAA3DCd0s0'
                devid: 'id1,sd@n5000cca267caa3dc/a'
                phys_path: '/scsi_vhci/disk@g5000cca267caa3dc:a'
                whole_disk: 1
                DTL: 107769
                create_txg: 4
                com.delphix:vdev_zap_leaf: 3139
            children[1]:
                type: 'disk'
                id: 1
                guid: 15326920071891554926
                path: '/dev/dsk/c3t5000CCA267C9A8B6d0s0'
                devid: 'id1,sd@n5000cca267c9a8b6/a'
                phys_path: '/scsi_vhci/disk@g5000cca267c9a8b6:a'
                whole_disk: 1
                DTL: 107768
                create_txg: 4
                com.delphix:vdev_zap_leaf: 16438
            children[2]:
                type: 'disk'
                id: 2
                guid: 6240283228962142119
                path: '/dev/dsk/c3t5000CCA267CA071Fd0s0'
                devid: 'id1,sd@n5000cca267ca071f/a'
                phys_path: '/scsi_vhci/disk@g5000cca267ca071f:a'
                whole_disk: 1
                DTL: 107767
                create_txg: 4
                com.delphix:vdev_zap_leaf: 614
            children[3]:
                type: 'disk'
                id: 3
                guid: 9313324224396922552
                path: '/dev/dsk/c3t5000CCA267CA2B4Dd0s0'
                devid: 'id1,sd@n5000cca267ca2b4d/a'
                phys_path: '/scsi_vhci/disk@g5000cca267ca2b4d:a'
                whole_disk: 1
                DTL: 107766
                create_txg: 4
                com.delphix:vdev_zap_leaf: 31369
        children[1]:
            type: 'hole'
            id: 1
            guid: 0
            whole_disk: 0
            metaslab_array: 0
            metaslab_shift: 0
            ashift: 0
            asize: 0
            is_log: 0
            is_hole: 1
        children[2]:
            type: 'disk'
            id: 2
            guid: 10251713868127739504
            path: '/dev/dsk/c14t1d0s0'
            devid: 'id1,kdev@E144D-Samsung_SSD_983_DCT_960GB_______________-S48CNC0N701868F_____-1/a'
            phys_path: '/pci@0,0/pci15ad,7a0@17/pci144d,a801@0/blkdev@1,0:a'
            whole_disk: 1
            metaslab_array: 170
            metaslab_shift: 33
            ashift: 9
            asize: 960183664640
            is_log: 1
            DTL: 107765
            create_txg: 42685425
            com.delphix:vdev_zap_leaf: 108
            com.delphix:vdev_zap_top: 129
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
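From the output above, the raidz vdev was created with ashift: 12 (2^12 = 4K sectors, matching the un_phy_blocksize of 0x1000 reported for the WD disks), while the separate log SSD (is_log: 1) uses ashift: 9, i.e. 512B sectors. To check just the ashift values you can filter the same command, for example:

zdb -C NAPPTANK | grep ashift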