_Gea,
I might have imagined it, but are you stopping development on the Solaris platform in favor of OI or Nexenta?
I have never had or heard of such problems with current OI.
Do you have plain OI or napp-it? With napp-it you can optionally use disk buffering for better performance with many disks; delete the buffer with menu Disks - delete disk buffer (needed if you did not remove the ZIL with menu Disks - remove).
What you can also try: reboot, or check zpool status at the CLI.
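As a rough sketch, the CLI checks meant here on a plain OI/illumos box would be something like:

zpool status -v    # pool/vdev state and any read/write/checksum errors
iostat -En         # per-device soft/hard/transport error counters
fmadm faulty       # anything the fault manager has flagged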
I am testing napp-it on the latest stable OmniOS on an old HP ML360 G5 (Broadcom GbE NICs, 12GB RAM) with 16 SAS hard drives, serving both as a datastore for ESXi 5 and as a backup repository for Veeam 6.5. This is going to be put in a co-location facility for disaster recovery.
2 x ESXi boxes (with NFS mounts)
1 x napp-it + OmniOS (serving NFS and iSCSI)
I installed OmniOS on an 8GB USB stick.
When I test LAN replication with Veeam from another ESXi host to my test ESXi box with the OmniOS NFS datastore, the NFS datastore keeps getting disconnected (and thus my replication fails) after a good amount of data has been transferred; it seems the NFS box cannot handle the high throughput, or something similar. Some replications succeed while others do not. It's unpredictable and inconsistent.
What can possibly go wrong? In the napp-it GUI, I click on System > Log but it's not showing any info now. It was showing log messages a couple of days ago, but not anymore.
The Veeam 6.5 box is a VM using a vmxnet NIC. I'm going to try the E1000 and see if it helps.
This sounds very similar to the issue I have, except mine is an All-in-One setup, and yet the SAN VM will become disconnected with an "All Paths Down" error in ESXi. It will run fine for a day or so, then start throwing those errors every so often; usually ESXi reconnects, but during that time all VMs of course halt.
I have tried using both the vmxnet3 and e1000 NICs on the OmniOS VM. I tried several different network tuning tips to solve the issue, and of course the out-of-the-box settings too. I put both the SAN and ESXi IPs in both machines' hosts files to eliminate a possible DNS issue.
I have rebuilt the ESXi machine and OmniOS vm. No solution.
With ESXi I have only been using NFS.
From some research, there might be some NFS settings that need to be tweaked under ESXi Advanced Settings. Try Oracle's ZFS appliance / vSphere NFS best-practice settings in the PDF below. I'll try them and report back.
Go to page 22 and try some of the recommended settings.
http://www.oracle.com/technetwork/s...mentation/bestprac-zfssa-vsphere5-1940129.pdf
At the bottom of this page there are also recommended settings for ESXi and NFS. I know it's old, but just use it as a reference.
http://communities.vmware.com/thread/197850?start=0&tstart=0
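For anyone who wants to see their current values before changing anything, they show up under Advanced Settings in the vSphere Client, or from the ESXi shell with esxcli (option paths as they appear on ESXi 5.x; the recommended values themselves are in the two links above):

esxcli system settings advanced list -o /NFS/HeartbeatFrequency
esxcli system settings advanced list -o /NFS/HeartbeatTimeout
esxcli system settings advanced list -o /NFS/HeartbeatMaxFailures
esxcli system settings advanced list -o /Net/TcpipHeapSize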
Anyone try enabling lz4 zfs compression in OpenIndiana, or have any idea when it will appear in an update?
http://wiki.illumos.org/display/illumos/LZ4+Compression
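For what it's worth, once a build ships with the feature flag, enabling it should just be something like this ('tank' is a placeholder pool name; note that activating the feature makes the pool unreadable to software without lz4 support):

zpool set feature@lz4_compress=enabled tank
zfs set compression=lz4 tank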
Also, VMware has a KB article about NFS disconnects with NetApp storage where they suggest lowering NFS.MaxQueueDepth to 64 from a very high default. But I think this will limit your I/O. I haven't tried it, though.
http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=2016122
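If you want to experiment with that, the change itself is a one-liner from the ESXi shell (a host reboot may be needed before it takes effect, per the KB):

esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 64
esxcli system settings advanced list -o /NFS/MaxQueueDepth   # confirm the new value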
I also read somewhere that if you have one of the Supermicro motherboards with a built-in Intel NIC whose model ends in LM (I think), it's incompatible with ESXi 5; but you are running an All-in-One, so that may not apply.
If I can't find a fix, I'm going to test and use iSCSI instead.
My opinion is: some of you guys are over-extending experimental systems. It's fine for testing and fun, but I wouldn't expect those setups to work, or expect to find fixes.
I think if you run ZFS on bare metal you will still find problems; stacking it into a big system seems like trouble to me. There's a reason for virtualization, and giving yourself more trouble is not it.
Is ZFS on Linux still not recommended? Ideally I would use Ubuntu, but when I set up my server ZFS on Linux was not recommended. Has anyone made the switch from OI to Ubuntu? If it still isn't recommended, I guess I'll switch to OmniOS.
I started my server with a mirror of 40GB drives and found that was insufficient. I have replaced them one at a time with 500GB drives and have autoexpand on, but I can't seem to get the pool to expand. Is there a safe and not-too-painful way to do this in napp-it?
Thanks
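In case it helps, the generic ZFS way to get a mirror to grow after both disks have been swapped usually looks like this (pool and disk names are placeholders, not your actual devices):

zpool set autoexpand=on tank
zpool online -e tank c0t1d0   # repeat for each replaced disk; -e expands into the new space
zpool list tank               # SIZE should now reflect the larger drives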
I thought about trying this as a possible solution for me; however, do you think it would matter in my case, since the storage device is actually a VM within ESXi (All-in-One)?

I did experience the same issue.
I used one VMkernel interface for both management and storage, and the OI/Omni VM had only one e1000 NIC.
During high load, the vSphere host showed the storage in a "disconnected" state.
My fix (a rough command sketch follows below):
For the ESXi host: I created a separate management VMkernel port in one VLAN and another VMkernel port for storage in a different VLAN, and set a different physical NIC teaming policy for each VLAN.
For the storage VM: one vNIC for storage, one vNIC for management, one vNIC for ZFS replication.
Use vmxnet3 if you can; the e1000 vNIC will drop frames under heavy load. SSH to the ESXi host, run esxtop, press 'n' for the network view, and read the dropped-packet columns.
HTH.
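The ESXi side of that split is just an extra portgroup plus a VMkernel interface per role; on ESXi 5.x it looks roughly like this (vSwitch name, VLAN ID and addresses below are made-up placeholders):

esxcli network vswitch standard portgroup add --portgroup-name=Storage --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name=Storage --vlan-id=20
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Storage
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.20.10 --netmask=255.255.255.0 --type=static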
Gea,
Sorry if this isn't the right place, but I've noticed that once in a while my file server will kill all SMB share access... It seems to happen randomly, either under heavy access for extended periods or after the server has been up for a very long time (months).
I haven't been able to nail down the cause. The server itself remains completely responsive, as does your console (napp-it), so I just go in and reboot the server and all is well.
Is there somewhere I can check to see if any log might have info as to why this is happening?
Thanks!
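For reference, the usual places to look on a stock OI/OmniOS box when the kernel SMB service misbehaves (generic illumos commands, nothing napp-it specific) would be:

svcs -xv network/smb/server   # is the SMB service still online, and if not, why
tail -100 /var/adm/messages   # general system log, including SMB/ZFS kernel messages
fmdump -e | tail              # recent fault-management error events (add -V for details)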
If you are in doubt whether the virtual ESXi NIC is the problem, you can either try the e1000 or the faster vmxnet3, or a physical NIC that you pass through.
I made the switch from OpenIndiana to ZoL 0.6.1 under Ubuntu recently, with three servers in different roles (one home server, one NAS/SAN for video production, and one backup server for a business). If you look at the ZoL bug tracker, it's easy to see the implementation is not mature yet; however, I have had only minor issues so far, so I don't regret making the switch.
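For anyone wanting to try the same, at the time of ZoL 0.6.1 the usual install route on Ubuntu was the project's PPA (from memory, so double-check against the zfsonlinux.org instructions):

sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install ubuntu-zfs   # pulls in the SPL/ZFS kernel modules via DKMS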
device r/s w/s kr/s kw/s wait actv svc_t %w %b
sd1 25.0 170.0 253.5 1862.5 0.0 0.5 2.7 0 31
sd2 48.0 316.0 448.5 3346.0 0.0 0.6 1.7 0 36
sd3 68.0 315.0 385.0 3346.0 0.0 0.6 1.5 0 32
sd4 30.0 334.0 188.0 3531.5 0.0 0.6 1.7 0 34
sd6 33.0 282.0 141.0 3531.5 0.0 0.6 2.1 0 41
sd8 28.0 172.0 149.5 1862.5 0.0 0.5 2.7 0 28
sd19 39.0 281.0 364.0 3857.5 0.0 0.6 1.8 0 35
sd20 30.0 278.0 217.5 3857.5 0.0 0.7 2.1 0 36
sd21 27.0 149.0 217.0 1927.0 0.0 0.5 2.7 0 28
sd22 33.0 149.0 159.5 1927.0 0.0 0.5 3.0 0 34
sd23 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
storage-d2-a 4.53T 2.62T 1.91T 57% 1.00x ONLINE -
syspool 18.5G 1.81G 16.7G 9% 1.00x ONLINE -
pool: storage-d2-a
state: ONLINE
scan: none requested
config:
NAME                       STATE     READ WRITE CKSUM
storage-d2-a               ONLINE       0     0     0
  mirror-0                 ONLINE       0     0     0
    c0t5000C50035F27643d0  ONLINE       0     0     0
    c0t5000C500362C678Fd0  ONLINE       0     0     0
  mirror-1                 ONLINE       0     0     0
    c0t5000C5003631A8A5d0  ONLINE       0     0     0
    c0t50014EE20595F221d0  ONLINE       0     0     0
  mirror-2                 ONLINE       0     0     0
    c0t50014EE25AEB2D0Ad0  ONLINE       0     0     0
    c0t50014EE2B035B0DDd0  ONLINE       0     0     0
  mirror-3                 ONLINE       0     0     0
    c0t5000C5003F3560DCd0  ONLINE       0     0     0
    c0t5000C500362C601Fd0  ONLINE       0     0     0
  mirror-4                 ONLINE       0     0     0
    c0t5000C500362C5E05d0  ONLINE       0     0     0
    c0t5000C500362BFF84d0  ONLINE       0     0     0
logs
  c0t50015179594D70EEd0    ONLINE       0     0     0
errors: No known data errors
The menus in the latest version of napp-it render very strangely in Chrome:
https://dl.dropboxusercontent.com/u/43251/nappit.jpg
I have performance problems with Nexenta. After one year of problem-free operation, I now have problems every day with heavy writes, which make the zpool slow.
Nexenta is used only as an iSCSI target for virtual servers. I checked all ZVOLs used for iSCSI targets with the DTrace script zfsio.d (https://github.com/kdavyd/dtrace/blob/master/zfsio.d), which shows me disk I/O per ZVOL, but there are no heavy writes on any ZVOL.
However, if I check iostat -x 1, there are many write operations per second (many more than the sum of write operations reported by the zfsio DTrace script).
If I check which process is making the disk writes, it's "zpool-<poolname>".
The zpool is 57% full.
Do you have any idea where the heavy disk writes come from? How can I find what causes them?
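If it helps with reproducing this, one common way to attribute physical writes to a process is a DTrace one-liner along these lines (run it for a few seconds, then Ctrl-C; asynchronous ZFS writes typically get charged to the zpool-<poolname> kernel thread, which matches what is described above):

dtrace -n 'io:::start /!(args[0]->b_flags & B_READ)/ { @[execname] = sum(args[0]->b_bcount); }'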
2013 Jul 2 14:35:10 storage-d2-a 1424 ms, 36 wMB 2 rMB 1897 wIops 275 rIops 23+1 dly+thr; dp_wrl 205 MB .. 207 MB; res_max: 206 MB; dp_thr: 207
2013 Jul 2 14:35:15 storage-d2-a 1143 ms, 33 wMB 1 rMB 2966 wIops 97 rIops 54+1 dly+thr; dp_wrl 207 MB .. 245 MB; res_max: 211 MB; dp_thr: 245
2013 Jul 2 14:35:20 storage-d2-a 1404 ms, 32 wMB 3 rMB 2142 wIops 274 rIops 0+0 dly+thr; dp_wrl 223 MB .. 245 MB; res_max: 123 MB; dp_thr: 245
2013 Jul 2 14:35:25 storage-d2-a 1317 ms, 30 wMB 3 rMB 2173 wIops 333 rIops 0+0 dly+thr; dp_wrl 218 MB .. 223 MB; res_max: 133 MB; dp_thr: 223
2013 Jul 2 14:35:30 storage-d2-a 1285 ms, 31 wMB 5 rMB 2129 wIops 326 rIops 21+0 dly+thr; dp_wrl 218 MB .. 233 MB; res_max: 207 MB; dp_thr: 233
2013 Jul 2 14:35:35 storage-d2-a 1283 ms, 28 wMB 2 rMB 1965 wIops 328 rIops 0+0 dly+thr; dp_wrl 233 MB .. 234 MB; res_max: 171 MB; dp_thr: 233
2013 Jul 2 14:35:40 storage-d2-a 1419 ms, 28 wMB 2 rMB 1858 wIops 242 rIops 0+0 dly+thr; dp_wrl 234 MB .. 237 MB; res_max: 175 MB; dp_thr: 237
2013 Jul 2 14:35:45 storage-d2-a 1651 ms, 24 wMB 5 rMB 1502 wIops 450 rIops 0+0 dly+thr; dp_wrl 214 MB .. 237 MB; res_max: 124 MB; dp_thr: 237
2013 Jul 2 14:35:50 storage-d2-a 1575 ms, 32 wMB 2 rMB 1853 wIops 264 rIops 0+0 dly+thr; dp_wrl 213 MB .. 214 MB; res_max: 162 MB; dp_thr: 214
2013 Jul 2 14:35:54 storage-d2-a 1493 ms, 32 wMB 5 rMB 1880 wIops 458 rIops 39+1 dly+thr; dp_wrl 213 MB .. 231 MB; res_max: 214 MB; dp_thr: 231
2013 Jul 2 14:35:59 storage-d2-a 1290 ms, 26 wMB 1 rMB 1703 wIops 113 rIops 0+0 dly+thr; dp_wrl 222 MB .. 231 MB; res_max: 151 MB; dp_thr: 231
2013 Jul 2 14:36:04 storage-d2-a 1316 ms, 26 wMB 2 rMB 2057 wIops 398 rIops 0+0 dly+thr; dp_wrl 220 MB .. 222 MB; res_max: 144 MB; dp_thr: 222
2013 Jul 2 14:36:09 storage-d2-a 1484 ms, 25 wMB 1 rMB 1642 wIops 163 rIops 0+0 dly+thr; dp_wrl 209 MB .. 220 MB; res_max: 167 MB; dp_thr: 219