So obviously, moving the card into another slot gave it another device path. I had to go into the host configuration and enable the controller for passthrough again, then reboot the host. Then I had to remove the PCI passthrough device from the NAS VM and boot it. Once it was up, I shut it down again and added the "new" passthrough device. Only then could it boot again. Now I have imported the pool again and everything works like a charm.
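For reference, re-importing the pool after the device path change is just a normal import; a minimal sketch (the pool name "tank" is only an example, substitute your own):

# list pools that are visible but not yet imported
zpool import
# import the pool by name; -d /dev/dsk rescans the device links if needed
zpool import -d /dev/dsk tank
# verify the pool and its new device paths
zpool status tank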
Cheers!
A failover from an active head1 to a standby head2 happens under the control of the cluster control server/VM. This means that the control server initiates a fast remote shutdown of head1, followed by a pool import on head2, a failover of the HA IP and, optionally, a restore of services like iSCSI or www, or a user/group restore from the formerly active head.
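Roughly, the sequence the control server runs could look like the following sketch (hostnames, pool name, interface and HA address are placeholders; the real cluster scripts may differ):

# fast remote shutdown of the active head1
ssh root@head1 'shutdown -y -g0 -i5'
# import the pool on the standby head2 (force, since head1 did not export it)
ssh root@head2 'zpool import -f tank'
# move the HA IP to head2 (interface name is only an example)
ssh root@head2 'ipadm create-addr -T static -a 192.168.1.100/24 e1000g0/ha'
# optionally restart services like the iSCSI target on head2
ssh root@head2 'svcadm restart network/iscsi/target'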
For this you normally do not need an additional kill command for head1 (STONITH, "shoot the other node in the head"). But if head1 hangs for whatever reason and head2 imports the pool, the pool can become corrupted. This is why a second, independent kill mechanism for a formerly active head is implemented. If the whole cluster is virtualised, this can be a VM reset via SSH to ESXi. With a barebone server you can initiate a hard reset via IPMI.
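As an illustration (VM id, BMC address and credentials are placeholders), a hard VM reset over SSH to ESXi, or a hard reset of a barebone head via IPMI, could look like this:

# virtualised head: find the VM id, then reset it via the ESXi shell
ssh root@esxi-host 'vim-cmd vmsvc/getallvms'
ssh root@esxi-host 'vim-cmd vmsvc/power.reset 12'
# barebone head: hard reset via the IPMI/BMC interface
ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret chassis power reset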
In both cases only the control server needs access to the ESXi management or the IPMI interface, e.g. via an additional NIC or vNIC there. If you do not need this additional security, you can skip/fake this step ("echo 1" simulates a successful STONITH). The heads themselves do not need ESXi or IPMI access.
btw
The multihost ZFS property is already in Illumos. This may be an additional option to STONITH to protect a pool; see the sketch after the link below.
but read
https://illumos.topicbox.com/groups...ture-multiple-import-protection-for-ha-setups
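A minimal sketch of using the property (assuming it behaves as in OpenZFS, that each head has its own stable hostid, and with "tank" again as an example pool name):

# each head needs a unique, stable hostid for multihost to be meaningful
hostid
# enable multihost protection on the pool
zpool set multihost=on tank
# check the current setting
zpool get multihost tank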
I'm getting this error every 15 minutes. When playing movies or music, the connection to the OI server (SMB share called 'storage') drops, so I am guessing it may be related. Otherwise connectivity is fine most of the time, without any authentication issues.
Apr 9 09:46:50 openindiana smbsrv: [ID 138215 kern.notice] NOTICE: smbd[NT Authority\Anonymous]: storage access denied: IPC only
Apr 9 09:46:50 openindiana last message repeated 7 times
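Going by the log line itself, the session was mapped to an anonymous user, which is only allowed to reach the IPC$ share, so access to the 'storage' share is denied. A way to start debugging on the OI side (a generic sketch, not a guaranteed fix):

# review the SMB server settings, including restrict_anonymous
sharectl get smb
# check the identity mappings between Windows and local users
idmap list
# follow the system log while reproducing the error
tail -f /var/adm/messages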
ServerName ShareName UserName Credential Dialect NumOpens
---------- --------- -------- ---------- ------- --------
SAN1 IPC$ LPT1\winadmin SAN1\omniadmin 3.0.2 0
SAN1 media LPT1\winadmin SAN1\omniadmin 3.0.2 2
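That client-side listing looks like the output of the Windows PowerShell SMB cmdlets; to reproduce it while the drop happens:

# show active SMB connections from this Windows client
Get-SmbConnection
# show the mapped drives and their status
Get-SmbMapping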
Tty.c: loadable library and perl binaries are mismatched (got handshake key 10c80080, needed 10f80080)
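That message comes from a compiled Perl XS module (Tty.c is most likely from IO::Tty) that was built against a different Perl than the one now running it. A generic way to rebuild it against the active Perl (your paths and module manager may differ):

# confirm which perl is actually being used
which perl
perl -v
# rebuild IO::Tty (and Expect, which depends on it) against that perl
perl -MCPAN -e 'install IO::Tty'
perl -MCPAN -e 'install Expect'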