vsonerud

Members
  • Posts: 35
  • Joined
  • Last visited

Converted
  • Gender: Undisclosed


vsonerud's Achievements

Noob (1/14)

Reputation: 1

  1. My unRAID system is still running version 6.5.3 and I am planning to upgrade "soon" - but just by chance I stumbled upon this thread, and since I have 2 Seagate ST8000VN0022 drives in my array it seemed like a good idea to use the SeaChest utilities to disable EPC and low-current spinup, just to be on the safe side. However, when running the SeaChest_Configure executable with --EPCfeature disable, an error occurs stating that it is an unknown option, referring to --help for more information - and running with --help confirms that --EPCfeature is not a valid option (anymore). So, does anyone know why the --EPCfeature option has been removed? Or does anyone have an older version to share? (The commands I was trying are sketched after this post list.) The version I have tried reports the following version information: SeaChest_Configure Version: 2.3.1-4_1_1 X86_64
  2. Just tried connecting the 18 TB disk to an onboard SATA port - voila, now it worked like a charm to add it as a parity drive, and a parity rebuild is in progress after starting the array 😃
  3. Hi! I have recently bought a new Western Digital UltraStar 18 TB drive and connected it to my unRAID server. The drive is detected and successfully assigned the /dev/sdi device - and I have successfully run both short and extended SMART self-tests as well as 1 successful preclear cycle using the docker-based binhex preclear plugin. However, when attempting to add it as my new parity drive (instead of my current 8 TB drive) something strange happens. I stop the array - and immediately after I have selected the new 18 TB drive in the drop-down list for the first parity drive slot, the drop-down list refreshes / resets and nothing ends up being selected. Thereafter the new 18 TB drive is gone from the drop-down list of available drives to select (the old parity drive is the only one available). If I then select the old parity drive, restart the array and repeat the process above, the new 18 TB parity drive is available for selection - but after selecting it the drop-down list resets to nothing and the drive once again disappears from the drop-down list. I have also tried the Tools | New config route - both selecting that I want to preserve all previous assignments, and selecting to preserve only 'Data drives and Cache drives' from the previous assignments. In both cases the same as above happens - I am able to select the new 18 TB drive but the drop-down resets to nothing after selecting the drive. The following excerpt from the syslog is logged when attempting to select the new 18 TB drive as parity:
Sep 30 17:27:17 Tower emhttpd: req (25): changeDevice=apply&csrf_token=****************&slotId.0=WDC_WUH721818ALE6L4_2MJ9EKDG
Sep 30 17:27:17 Tower emhttpd: shcmd (26821): rmmod md-mod
Sep 30 17:27:17 Tower kernel: md: unRAID driver removed
Sep 30 17:27:17 Tower emhttpd: shcmd (26822): modprobe md-mod super=/boot/config/super.dat
Sep 30 17:27:17 Tower kernel: md: unRAID driver 2.9.3 installed
Sep 30 17:27:17 Tower emhttpd: Device inventory:
Sep 30 17:27:17 Tower emhttpd: ST8000VN0022-2EL112_ZA1E12T7 (sdj) 512 15628053168
Sep 30 17:27:17 Tower emhttpd: ST8000VN0022-2EL112_ZA15RXGL (sdg) 512 15628053168
Sep 30 17:27:17 Tower emhttpd: HGST_HDN724040ALE640_PK1334PCJWU0JS (sdh) 512 7814037168
Sep 30 17:27:17 Tower emhttpd: HGST_HDN724040ALE640_PK2334PEJ7YKST (sdd) 512 7814037168
Sep 30 17:27:17 Tower emhttpd: HGST_HDN724040ALE640_PK1334PBKDE35S (sde) 512 7814037168
Sep 30 17:27:17 Tower emhttpd: Hitachi_HDS723020BLA642_MN1220F30E225D (sdb) 512 3907029168
Sep 30 17:27:17 Tower emhttpd: HGST_HDN724040ALE640_PK1381PCJZ0VLS (sdf) 512 7814037168
Sep 30 17:27:17 Tower emhttpd: HGST_HDN724040ALE640_PK1381PCKY343S (sdc) 512 7814037168
Sep 30 17:27:17 Tower emhttpd: WDC_WUH721818ALE6L4_2MJ9EKDG (sdi) 512 35156656128
Sep 30 17:27:17 Tower emhttpd: Sony_Storage_Media_1A08012384785-0:0 (sda) 512 3962880
Sep 30 17:27:17 Tower kernel: mdcmd (1): import 0 sdi 64 17578328012 0 WDC_WUH721818ALE6L4_2MJ9EKDG
Sep 30 17:27:17 Tower kernel: md: import disk0: lock_bdev error: -13
Sep 30 17:27:17 Tower kernel: mdcmd (2): import 1 sdf 64 3907018532 0 HGST_HDN724040ALE640_PK1381PCJZ0VLS
Sep 30 17:27:17 Tower kernel: md: import disk1: (sdf) HGST_HDN724040ALE640_PK1381PCJZ0VLS size: 3907018532
Sep 30 17:27:17 Tower kernel: md: disk1 new disk
The two disk0 lines above (the mdcmd import of the new 18 TB drive and the "lock_bdev error: -13" that follows it) are the problematic ones. Has anyone seen anything like this before, or does anyone have any good advice on what could be the problem or what I should attempt to do to resolve the matter?
The unRAID server is running version 6.5.3, which I think should work for this drive. But I am in the process of attempting to upgrade to 6.10.3. The 18 TB drive is connected to an HBA card which originally was an IBM M1015 card, flashed many years ago with LSI SAS 9211-8i IT-mode firmware (version 15.0.0.0) and BIOS. And as mentioned above I have successfully precleared the drive.... It might also be worth mentioning that the server currently has 7 array devices (1 * 8 TB drive for parity, 1 * 8 TB data drive and 5 * 4 TB data drives) as well as a 2 TB cache drive - and the new unassigned 18 TB drive. The 5 * 4 TB data drives and the 2 TB cache drive are still using ReiserFS - and I was planning to convert these drives to XFS after having added the new 18 TB drive for parity, and then using the old 8 TB parity drive for data. ReiserFS apparently has a 16 TB hard limit, but could that affect the attempt to assign a brand new 18 TB unformatted drive as a parity drive? (I have also configured the server to have XFS as the default file system.)
  4. @Frank1940 The server is mainly used as a NAS, but has a couple of Docker containers running SABnzbd and Sonarr. No VMs. The following plugins are installed: Community Applications (2020.01.09 - unable to update, requires minimum unRAID 6.9.0), Dynamix System Statistics (2018.08.29a - unable to update, requires minimum unRAID 6.9.0), Fix Common Problems (2019.09.08 - unable to update, requires minimum unRAID 6.7.0), Dynamix Active Streams (up-to-date), Dynamix Cache Directories (up-to-date), Nerd Tools (up-to-date), Preclear Disks (up-to-date), Statistics (up-to-date). The array has 7 drives (5 * 4 TB + 2 * 8 TB) plus one 2 TB cache drive. 4 of the 6 data drives in the array still use ReiserFS (the plan is to migrate to XFS).
  5. Hi! I am still running unRAID 6.5.3 on my server and have postponed my upgrade to later versions for too long now. But what is the recommended upgrade path to 6.10.x? Any reason not to go directly from unRAID 6.5.3 to the latest 6.10.x release version?
  6. Hi! I am still running unRAID 5.x on my server and have postponed my upgrade to 6.x for too long now. But what is the recommended upgrade path to 6.x? Any reason not to go directly from unRAID 5.x to the latest 6.4 release version?
  7. Hello! Is it probable that applications running inside Docker containers in unRAID 6.x will be able to utilize AES-NI? (A quick way to check from inside a container is sketched after this post list.)
  8. Hello! I am thinking about migrating my current unRAID 5.x server to a virtualized setup. Ideally I would like to achieve the following:
* Run unRAID on the server (obviously)
* Run a Linux desktop environment (Linux Mint 16/17, for instance)
* Run a Windows 7 VM using VMDirectPath VGA passthrough
* Use only this server for both the Linux and Windows desktop environments (not be dependent on another computer for remote control of the VM)
I am anxious about the "official" unRAID 6.x virtualization approach with regard to my wishes above. Am I correct in assuming that it will be rather difficult to achieve? If I have understood things correctly, the official unRAID 6.x virtualization approach is to run unRAID as the host OS (Xen dom0)? This would make it difficult to have a complete Linux desktop environment on the same server, yes? Especially since I want to run a Windows 7 VM using VGA passthrough. To achieve a full Linux desktop environment on the same server I would then have to run an additional Linux VM, also utilizing a second discrete graphics card via VMDirectPath passthrough. Has anyone successfully done that? I mean: run unRAID on Xen dom0, with Windows 7 and some Linux distro each in their own domU VM - both using VMDirectPath VGA passthrough? (A rough domU config sketch is included after this post list.) There are also practical limitations, since my current setup is as follows:
* Motherboard: ASRock Z77 Extreme4-M
* CPU: Pentium G860
* HBAs: 2 * IBM M1015 flashed with IT-firmware
The motherboard has 3 * PCIe x16 slots (2 * PCIe 3.0 and 1 * PCIe 2.0), but when all are populated they will run in x8, x8 and x4 mode. In addition there is 1 PCIe x1 slot. Currently the 2 IBM M1015 HBA cards occupy the 2 PCIe 3.0 x16 slots (in x8 mode). Hopefully it will be OK to insert a Radeon HD5450 graphics card in the last PCIe x16 (PCIe 2.0) slot running in x4 mode. (The HD5450 is not a very demanding card, I think, and merely the fact that Radeon HD5450 PCIe x1 cards are available kind of indicates that....) Alternatively, I guess I will have to place the HD5450 graphics card in the first PCIe x16 slot and move one of the IBM M1015 cards into the PCIe x16 slot running in x4 mode (is that problematic?). The current Pentium G860 CPU obviously needs to be replaced with a Core i5/i7 CPU (I am leaning towards a Core i7 3770). To achieve my wishes above I have been thinking about doing the following instead:
* Install Linux Mint 16/17 as the host OS
* Install Xen 4.3/4.4
* Run unRAID 5.x as a guest OS in a domU VM
* Run a Windows 7 VM as a guest OS in a domU VM using VMDirectPath VGA passthrough
Any reason why this approach might be a bad idea? What about the "future-proofability" of that approach? And what about 64-bit unRAID 6.x availability?
  9. I have just finished building my new unRAID server using the ASRock Z77 Extreme4-M motherboard. No problems so far. The only issue for me was that I had to upgrade to the latest motherboard BIOS to be able to use more than one IBM M1015 controller card (reflashed with LSI IT-firmware). I am running unRAID 5.0 rc12a on the server.
  10. I am having problems using 2 IBM M1015 cards, which I have successfully flashed with P15 IT-firmware, when they are inserted in my new ASRock Z77 Extreme4-M motherboard. I am only able to get 1 card at a time working, in the "top" PCIe x16 slot (designated PCIE1). If I insert a second card in the other PCIe x16 slot (designated PCIE3), no connected drives will appear on the second controller (the ones connected to the first controller card appear anyhow). When both of these PCIe x16 slots are populated they will be running in 2 * PCIe x8 mode according to the motherboard manual. I have also tried switching the cards around to make sure that my problems weren't caused by one faulty card. With just one single card inserted, it has to be placed in the topmost PCIe x16 slot (designated PCIE1). It kind of seems that either of the cards, when inserted into the middle PCIe x16 slot (designated PCIE3), does not get recognized. When I flashed the 2 IBM M1015 cards with the P15 IT-firmware I did this in another old PC with a non-UEFI motherboard BIOS, and everything seemed to go well during the flashing process. BUT: I flashed both of the cards with the Option ROM included. Should I not have done this? Should I have flashed only 1 card with the Option ROM included? Or neither? Can I now reflash and "get rid of" the Option ROM simply by running: sas2flsh -o -f 2118it.bin ? (A sketch of the reflash sequence as I understand it follows after this post list.) UPDATE: I am following up on my own post above. After updating the motherboard BIOS to the latest version (1.50) both controller cards present themselves along with the connected drives, and all seems to work fine.
  11. I was just wondering the same. Typical - when I have been "planning" to upgrade from 4.7 for ages, and today when I am finally going to do it, the download link for the latest release is down. "try this one... http://download.lime-technology.com/download/" Ah, thank you very much!
  12. I was just wondering the same. Typical - when I have been "planning" to upgrade from 4.7 for ages, and today when I am finally going to do it, the download link for the latest release is down.
  13. I am planning to upgrade one of my old unRAID servers (which has been running 24/7 for the past ~5 years) and I am planning to use an IBM M1015 card flashed with IT-firmware in the new unRAID server. The motherboard for the new unRAID server has not been decided yet, but it would be nice if it could be used for M1015 flashing. However, it does not look too good for several of my candidate boards, which are based on the Intel B75, H77 or Z77 chipsets. Therefore I am trying to find out if any of my old computers may be used. These use the following motherboards/chipsets: Epox 9NPA+ Ultra (nForce4 Ultra) (old unRAID server), Asus P5E-VM HDMI (Intel G45), Gigabyte GA-MA785G-D3H (AMD 785G)
  14. I have 2 unRAID servers, the oldest one still running unRAID version 4.4.2. Is there any reason why I shouldn't upgrade to v4.7? And what can I do to "be prepared for the worst"? What if I have to roll back to the old version - is that unproblematic? Anything in particular I should do in advance before upgrading? (A flash-backup sketch is included after this post list.) The unRAID server is running on an old Epox 9NPA+ Ultra motherboard (nForce4 Ultra chipset) and has been running unRAID flawlessly for almost 3 years.
  15. One of my two unRAID servers uses an Epox 9NPA+ Ultra (nForce4 Ultra) motherboard with an AMD Athlon 64 X2 3800+ Socket 939 CPU. The server has 8 drives, 4 of them on the nForce4 Ultra SATA controller, and also has 3 Sil3132 PCI-E x1 controller cards, for a total of max 10 drives. The server has been running for a couple of years or so, and I have transferred HUGE amounts of data to/from it, including checksum verification, and I have NEVER ever seen a single error.
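
Sketch for item 1 above: the SeaChest_Configure invocations I was attempting, shown only as an illustration. The exact option names vary between SeaChest releases (which is exactly the problem described in that post), and the /dev/sg3 handle is a placeholder - verify everything against the --help and scan output of your own build before running it.

    # Placeholder device handle; find the right one with the tool's scan output
    SeaChest_Configure --scan

    # The options as I understood them from older SeaChest guides (assumed, not
    # verified against the 2.3.1 build mentioned above, where --EPCfeature is rejected):
    SeaChest_Configure -d /dev/sg3 --EPCfeature disable
    SeaChest_Configure -d /dev/sg3 --lowCurrentSpinup disable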
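
Sketch for item 7 above: a quick way to confirm that AES-NI is visible to applications inside a container. Containers share the host kernel, so /proc/cpuinfo shows the host's CPU flags; the alpine image name is just an example.

    # Prints the number of cpuinfo lines advertising the aes flag (> 0 means AES-NI is exposed)
    docker run --rm alpine grep -c aes /proc/cpuinfo

    # For a functional check, run "openssl speed -evp aes-256-cbc" inside any image that
    # ships the openssl binary and compare it with "openssl speed aes-256-cbc" - the EVP
    # variant should be several times faster when AES-NI is actually used.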
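
Sketch for item 8 above: roughly what a Xen 4.3/4.4 HVM domU definition with PCI VGA passthrough looked like at the time. All names, paths and the PCI address are made-up placeholders, and the accepted keys differ between Xen/xl versions - this only illustrates the shape of the setup, it is not a tested config.

    # /etc/xen/win7.cfg - hypothetical example
    name    = "windows7"
    builder = "hvm"
    memory  = 4096
    vcpus   = 2
    # Backing storage for the guest (placeholder LVM volume)
    disk    = ['phy:/dev/vg0/win7,hda,w']
    # PCI BDF of the discrete GPU to hand to the guest (find yours with lspci)
    pci     = ['0000:01:00.0']
    # Request that the GPU be passed through as the guest's primary VGA device
    gfx_passthru = 1
    boot    = "c"

The domain would then be started from dom0 with something like "xl create /etc/xen/win7.cfg".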
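
Sketch for item 10 above: the reflash sequence as I understand it from the usual M1015 crossflash guides - the point being that the Option ROM is only written when the BIOS image is passed with -b, so flashing the firmware alone leaves it out. Treat the exact flags as assumptions and check the sas2flsh documentation before erasing anything.

    # Note the card's SAS address first - it is lost when the flash is erased
    sas2flsh -o -listall
    sas2flsh -o -list

    # Erase the flash region (as per the crossflash guides)
    sas2flsh -o -e 6

    # Flash the IT firmware only; omitting "-b mptsas2.rom" leaves the Option ROM out
    sas2flsh -o -f 2118it.bin

    # Restore the SAS address noted earlier (placeholder value)
    sas2flsh -o -sasadd 500605bxxxxxxxxx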
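
Sketch for item 14 above: the simplest way I know of to "be prepared for the worst" is to copy the entire flash drive before upgrading, since unRAID and its configuration live on the flash and a rollback is essentially restoring those files. The backup destination below is a placeholder; copying to a data disk or to another machine over the network both work.

    # From a console/telnet session on the server: the flash is mounted at /boot
    mkdir -p /mnt/disk1/flash-backup-4.4.2
    cp -a /boot/. /mnt/disk1/flash-backup-4.4.2/

    # Rolling back is then a matter of copying the saved files (bzimage, bzroot and the
    # config folder) back onto the flash drive.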