MortenSchmidt

MortenSchmidt last won the day on February 26 2017


  1. I had the same problem. Guy left a message on Docker Hub saying he had to pull the 2.2.0 release. Not sure he is aware it breaks the :latest tag. A message here would have been nice.
  2. In case anyone is wondering: I got a long-winded error message concluding with "sqlite3.OperationalError: database or disk is full", and it turned out to be the Docker image running out of space. 2GB free was apparently not enough; after increasing the max Docker image size to 50GB, it was able to complete the db upgrade. Edit: Oh, and after upgrading you have to stop, delete (or keep as a backup) the v1 database, and start again. Don't put this off, as it does not automatically switch over to the v2 database, and the v2 database will need to sync up from the time the upgrade process started (mine took more than 24 hours, and I was more than 8000 blocks behind when I switched over to the v2 database).
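A rough pre-flight check along these lines could catch the "disk is full" failure before the upgrade starts. This is only a sketch: the 2.5x headroom factor is my own guess at a safe margin (the upgrade writes a complete v2 database alongside the v1 file), not an official chia requirement.

```python
import shutil
from pathlib import Path

def enough_space_for_upgrade(db_path: str, headroom: float = 2.5) -> bool:
    """Check that the drive holding the v1 database has room for the upgrade.

    The upgrade writes a whole new v2 database next to the v1 file, so
    free space should comfortably exceed the current database size.
    The headroom factor is an assumed margin, not a documented figure.
    """
    db = Path(db_path)
    free = shutil.disk_usage(db.parent).free
    return free >= db.stat().st_size * headroom
```

Run it against the v1 database file before kicking off "chia db upgrade", and abort if it returns False.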
  3. No worries. My problem was corruption in the wallet files. The problem first occurred when the SSD had run out of space (during a db upgrade attempt). Deleting the following and starting up helped: blockchain_wallet_v2_mainnet_xxxxxxxxxx.sqlite-shm blockchain_wallet_v2_mainnet_xxxxxxxxxx.sqlite-wal I left the main wallet file (.sqlite), and after that the wallet quickly synced up and "chia wallet show" returns the expected output. Now... to figure out how much space is needed to do the db upgrade; I believe I had around 84GB free before starting the process, and yet it just failed on me again.
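The cleanup described above can be sketched as a small helper. Assumptions to note: this only touches the SQLite -shm/-wal sidecar files (never the main .sqlite), and it is only safe while the chia wallet process is fully stopped, since a running process owns those files.

```python
from pathlib import Path

def remove_wal_sidecars(wallet_dir: str) -> list:
    """Delete SQLite -shm/-wal sidecar files for the wallet databases.

    Only safe while the chia wallet process is stopped; the main
    .sqlite file itself is left untouched.
    """
    removed = []
    for pattern in ("*.sqlite-shm", "*.sqlite-wal"):
        for f in Path(wallet_dir).glob(pattern):
            f.unlink()
            removed.append(f.name)
    return removed
```

Point it at the wallet db directory (e.g. ~/.chia/mainnet/wallet/db) and it returns the names it removed, so you can log what was deleted.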
  4. I am having a bit of wallet trouble. Running "chia wallet show" returns "No keys loaded. Run 'chia keys generate' or import a key". I am running 0.7.0. "chia keys show" does show keys are loaded (Master, Farmer and Pool public keys are displayed, along with a First wallet address). I am earning payouts on XCHpool as well, according to the xchpool explorer tool. Help?
  5. (I think??) I'm running the test stream (ghcr.io/guydavis/machinaris-hddcoin:test), and running "docker exec -it machinaris-hddcoin hddcoin version" returns "1.2.10.dev121". What I did was simply add the :test to the repo in the unraid "Update Container" dialog and hit apply; it looked to me like it pulled the new image. While I see you have more elaborate instructions in the wiki, please help me understand if and why all that is needed, and whether running those commands will cause all of my running dockers to stop, wipe and re-pull? I've used docker commands a bit but never encountered the docker-compose command, nor a need for it. PS. Also, running "docker exec -it machinaris-hddcoin hodl -h" (or without the -h) returns: OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "hodl": executable file not found in $PATH: unknown I do see your note in the changelog for the :test stream about v6.9 updating to v1.2.11, but according to the hddcoin github, v2.0 is needed for hodl.
  6. Check if the database drive ran out of space? You might have a different issue, but I had that happen; I don't recall what errors I got, but the only way out was a database resync. I saw some say you could export it from sqlite as text, remove the first and last lines and reimport; it's just surprisingly difficult to do with a 30+GB file when you're not an expert on sed, and I guess there's no guarantee it would work even if you could do that. That would be welcomed, but also checking whether the drive that holds the database is running out of space would be beneficial. Yes, unraid does have low-space warnings, but it's not very granular, and it'd be nice to have within machinaris. On another note, have you tested HDDcoin v2.0 at all? I'm sort of interested in their HODL program, which requires the new version.
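The dump/edit/reimport dance described above can be done without sed or a 30+GB text file: Python's sqlite3 module can stream the dump statement by statement into a fresh database. A sketch, with the usual caveat from the post itself: there is no guarantee a dump/reload salvages a corrupt database.

```python
import sqlite3

def dump_and_rebuild(src_path: str, dst_path: str) -> None:
    """Stream a SQL dump of one SQLite database into a fresh one.

    iterdump() yields one statement at a time, so nothing close to the
    full multi-GB text dump is ever held in memory, and there is no
    need to hand-edit the dump with sed.
    """
    src = sqlite3.connect(src_path)
    # autocommit mode, so the BEGIN/COMMIT emitted by iterdump() work as-is
    dst = sqlite3.connect(dst_path, isolation_level=None)
    dst.execute("PRAGMA journal_mode = OFF")  # speed up the bulk load
    for stmt in src.iterdump():
        dst.execute(stmt)
    src.close()
    dst.close()
```

Expect this to be slow on a 30+GB database, but it avoids both the intermediate text file and the manual line surgery.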
  7. Quick note to anyone having the problem of the farmer and wallet not starting up after updating the docker, which currently runs "1.2.1-dev0": try forwarding port 8555 in your router; after I did this, everything starts normally again. The Chia v1.2.0 release notes say something about an RPC system being added that uses this port. EDIT: Another quick note: to pool, you need to generate new plots with the -c switch; read tjb_altf4's guide thoroughly or check out the official documentation. It is not enough to upgrade to 1.2 and create new plots; unless plots are created with the -c switch, they will be legacy solo-farming plots. But that said, I am currently stuck trying to join a pool. I have followed tjb_altf4's guide above to successfully create the NFT (well, it shows up with "chia plotnft show", and I now have two wallets, so I think that has worked), but when I try to join a pool with "chia plotnft join -i 2 -u https://europe.ecochia.io" I get the somewhat puzzling error message "Incorrect version: 1, should be 1". Anyone know what's up with that? EDIT: Must have been a problem specific to ecochia; no problem with pool.xchpool.org.
  8. For the sake of people searching the forum, the card featured in that video is the Lenovo FRU 03X3834. TheArtofServer guy says it is a newer card than the IBM FRU 46M0997 and that the cards he received did not have any of the firmware bugs the other cards used to come with. I had a lot of trouble digging up any mentions of this card, so thank you for linking this video.
  9. Thank you to all who have contributed to this mammoth information dump. I tried to follow the instructions in the wiki to convert my recently acquired IBM M1015 card to an LSI SAS9211-8i, but the established process failed with "No LSI SAS adapters found!" already in step 1. Maybe the information on how to overcome this is already in one of the prior 67 pages of the thread, but I did not sit down to read through it all. I did find a workable solution here: https://www.truenas.com/community/threads/ibm-serveraid-m1015-and-no-lsi-sas-adapters-found.27445/ I have used this successfully and have taken the liberty of updating the unraid wiki with this information, hoping it might help someone. This is what I have added:
  Note on converting newer IBM M1015 cards to plain SAS2008 (LSI SAS9211-8i): If you encounter "No LSI SAS adapters found!" in step 1 and when launching the LSI SAS2FLSH tool (either the DOS or EFI version) manually, it may be because newer versions of the IBM M1015 firmware prevent the card from being recognized by the LSI tool. In this case you will need to:
  - Obtain a copy of "sbrempty.bin", for example from https://www.mediafire.com/folder/5ix2j4jd9n3fi77,x491f4v3ns5i40p,1vcq9f93os76u3o,yc0fsify6eajly0,xkchwsha0yopqmz/shared
  - Manually read the SAS address from the sticker on the back side of the card, as you aren't able to read it out with the sas2flsh.exe tool. It has the format "500605B x-xxxx-xxxx"; ignore spaces and dashes and note down the SAS address in the format "500605Bxxxxxxxxx".
  - Still read all the instructions and precautions in the guide (have only one controller card in the machine, preferably have the machine on UPS power, etc.).
  - Execute "MEGAREC -writesbr 0 sbrempty.bin" (this wipes out the vendor ID; after this command SAS2FLSH can see the card, but refuses to read out the SAS address or erase the card).
  - Execute "MEGAREC -cleanflash 0" (this erases the card, including the SAS address).
  - Reboot the machine.
  - From here you can follow the guide in the P20 package after step 3: run the "5ITP20.BAT" batch file in the 5_LSI_P20\ folder, then 6.bat in the root folder, which you modified with your SAS address beforehand.
  - For ease of reference, and because not much is left of the guide at this point, the actual commands remaining are "SAS2FLSH -o -f 2118it.bin" to flash the P20 IT-mode firmware, and "SAS2FLSH -o -sasadd 500605Bxxxxxxxxx" to set the SAS address.
  10. The sale was April 20-21 only, as noted in the headline. It was in their email promo (which I usually never read).
  11. Green (model WDC_WD50EZRX). But I fail to see how that matters. Most of those colors are the same product with different settings, label, warranty terms and price. The beautiful art of product segmentation.
  12. http://www.bestbuy.ca/en-CA/product/wd-wd-my-book-5tb-3-5-external-hard-drive-wdbfjk0050hbk-nesn-wdbfjk0050hbk-nesn/10384896.aspx I shelled a 5TB Elements last summer; it works fine so far, and it had warranty coverage on the internal drive's serial when I looked it up. This is the My Book, but I assume it is the same drive inside. Cheap hard drives have been hard to come by lately up here. If you have any better deals, please share.
  13. I too get this; it has probably happened 3 or 4 times in total. Today it happened on 6.1.9 while building a docker image I'm (trying to) develop. In my case, syslogd is the sender, and I noticed my tmpfs (/var/log) is full. Next time you guys get this, check "df -h" and look for:
  Filesystem      Size  Used Avail Use% Mounted on
  tmpfs           128M  128M     0 100% /var/log
  In my case, /var/log/docker.log.1 was about 127MB in size (of mostly gibberish). Last time this happened, docker didn't like it a lot either: already-running dockers worked fine, but I was unable to start/stop new ones (the docker daemon seems to crash; impossible to check syslog, since that stops working too). Any good ideas how to prevent docker logs from ballooning like they seem to do for me?
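A small watchdog could flag runaway log files before the tmpfs fills. This is a sketch; the 50MB threshold is an arbitrary example, chosen only because one ~127MB docker.log.1 was enough to fill the 128MB /var/log mount above.

```python
import os

def oversized_logs(log_dir="/var/log", limit_mb=50):
    """Return (path, size-in-MB) pairs for files larger than limit_mb.

    Handy on a small tmpfs /var/log, where a single runaway
    docker.log can fill the whole mount.
    """
    hits = []
    for root, _dirs, files in os.walk(log_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                mb = os.path.getsize(path) // (1024 * 1024)
            except OSError:
                continue  # file vanished or is unreadable
            if mb >= limit_mb:
                hits.append((path, mb))
    return hits
```

Separately, if the ballooning file is a container log, Docker's json-file logging driver supports a max-size option (set in /etc/docker/daemon.json or per container) that rotates logs automatically; that may be the cleaner fix than any watchdog.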
  14. I kept looking at this a bit longer; the other way to get rid of the false warnings is to change from monitoring raw to normalized values. This way one could still monitor field 187's normalized value. I also looked closer at the smartctl -x report, which breaks out which fields are relevant to pre-failure prediction and which are not.
  SMART Attributes Data Structure revision number: 10
  Vendor Specific SMART Attributes with Thresholds:
  ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
    5 Reallocated_Sector_Ct   -O--CK  100   100   000    -    0
    9 Power_On_Hours          -O--CK  000   000   000    -    909937 (110 2 0)
   12 Power_Cycle_Count       -O--CK  100   100   000    -    116
  170 Unknown_Attribute       PO--CK  100   100   010    -    0
  171 Unknown_Attribute       -O--CK  100   100   000    -    0
  172 Unknown_Attribute       -O--CK  100   100   000    -    0
  174 Unknown_Attribute       -O--CK  100   100   000    -    27
  184 End-to-End_Error        PO--CK  100   100   090    -    0
  187 Reported_Uncorrect      POSR--  118   118   050    -    199827011
  192 Power-Off_Retract_Count -O--CK  100   100   000    -    27
  225 Unknown_SSD_Attribute   -O--CK  100   100   000    -    327259
  226 Unknown_SSD_Attribute   -O--CK  100   100   000    -    65535
  227 Unknown_SSD_Attribute   -O--CK  100   100   000    -    30
  228 Power-off_Retract_Count -O--CK  100   100   000    -    65535
  232 Available_Reservd_Space PO--CK  100   100   010    -    0
  233 Media_Wearout_Indicator -O--CK  100   100   000    -    0
  241 Total_LBAs_Written      -O--CK  100   100   000    -    327259
  242 Total_LBAs_Read         -O--CK  100   100   000    -    146395
  249 Unknown_Attribute       PO--C-  100   100   000    -    11051
                              ||||||_ K auto-keep
                              |||||__ C event count
                              ||||___ R error rate
                              |||____ S speed/performance
                              ||_____ O updated online
                              |______ P prefailure warning
  I decided to add fields 170, 184 and 232, since these are classified as "prefailure warning" fields, and also field 233 because of its title. (But not 249, since that already has a high raw value which, like 187, is not 'auto-kept', so presumably resets with power-on like 187 does.)
  Interestingly, the only field that UnRaid monitors by default that is classified as a prefailure-warning field for this SSD is 187 (which it monitors incorrectly, in raw mode). Not sure if monitoring all those fields including 187 in normalized mode, or all of them excluding 187 in raw mode, is better. I'm leaning toward the latter, thinking that will give the earliest warning possible, but on the other hand, since it is just a cache device, maybe we don't need to be concerned with the raw values. In case anyone at limetech sees this, the obvious improvement suggestion is to offer the possibility of monitoring some fields in raw mode and others in normalized mode, or to implement the extended attributes so field 187 can be monitored in raw mode correctly.
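Monitoring the normalized values yourself is straightforward to script against the attribute table that `smartctl -A` prints. A sketch, assuming the table layout shown above; the watched IDs (170, 184, 187, 232) follow the prefailure-flagged fields discussed in the post, and a normalized VALUE at or below THRESH is the standard SMART failure condition.

```python
# Attribute IDs to watch; taken from the prefailure-flagged rows above.
WATCHED = {170, 184, 187, 232}

def failing_attributes(smartctl_output):
    """Return watched attribute IDs whose normalized VALUE is at or
    below THRESH, parsed from `smartctl -A` style table lines.

    Columns assumed per row: ID# NAME FLAGS VALUE WORST THRESH FAIL RAW.
    """
    failing = []
    for line in smartctl_output.splitlines():
        parts = line.split()
        if len(parts) < 6 or not parts[0].isdigit():
            continue  # skip headers, blank lines and the flag legend
        attr_id = int(parts[0])
        if attr_id not in WATCHED:
            continue
        value, thresh = int(parts[3]), int(parts[5])
        if value <= thresh:
            failing.append(attr_id)
    return failing
```

Fed the table above, this returns an empty list (187's normalized 118 is well clear of its 050 threshold), which is exactly the point: the normalized view does not trip on 187's huge, power-cycle-resetting raw value.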