WeeboTech

Moderators
  • Posts

    9,457
  • Joined

  • Last visited

  • Gender
    Male
  • Location
    Brooklyn, NY


WeeboTech's Achievements

Veteran

Veteran (13/14)

11

Reputation

  1. RE: unrecover These days I might recommend a used Dell E series laptop. They have an eSATA port, and the E series dock also has one. Note that it does not support port multipliers; only the first drive will be seen. An older used AMD-based HP MicroServer Gen10 might also work, since it doesn't require caddies: four screws and slide in the drive. This is what I am using these days for network-based backups and for transitioning/merging smaller hard drives to larger ones. While the newer HP MicroServer Gen10 Plus is modern and powerful, I personally find it better suited to a set-it-and-forget-it unRAID server due to the delicate nature of its front panel and external laptop-style PSU. The older AMD-based HP MicroServer Gen10 is self-contained and better suited for frequent drive swaps. Even then, I acquired a StarTech external trayless eSATA device to make things easier.
  2. WeeboTech

    PC in a Desk

    At the time I was recording, it went to a 4-track, then an 8-track, then was mixed down to stereo directly into a sound card with Sound Forge. I haven't played or composed in a very long time.
  3. There are many different methods to clear and/or certify a disk. I use unRAID itself on a refurbished laptop with badblocks, doing a multipass pattern test. At the end of it I use the preclear script to add the signature. It's an old Dell E series laptop which has an eSATA port, along with an ORICO Tool-Free USB 3.0 & eSATA to 2.5"/3.5" SATA External Hard Drive Lay-Flat Docking Station. It's a pretty compact temporary solution for getting the job done away from the server. Granted, it is an extra cost, but a refurbished Dell can be scored pretty cheaply on eBay: E6510, E4310, E4300, etc. These models have an eSATA port but do not support PMP. They are useful for diagnostics and/or clearing/certifying a disk. I'm sure a more modern refurbished laptop with USB 3.0 would suffice as well.
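    For reference, the multipass pattern test described above maps onto a single badblocks invocation. A minimal sketch, assuming /dev/sdX stands in for the drive being certified (note that -w is destructive and wipes the disk):

        # Four-pass destructive write test (patterns 0xaa, 0x55, 0xff, 0x00),
        # with progress (-s), verbose errors (-v), and 4096-byte blocks.
        badblocks -wsv -b 4096 /dev/sdX
        # Afterwards, run the preclear script to write the signature;
        # the exact invocation depends on the script version in use.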
  4. Just had one start failing. The issue with these drives is that death is usually sudden and catastrophic. The parity check was good on Aug 27th and the array was healthy; then at 6:30am I started getting pending sector warnings. What is odd is that the numbers kept going up even though I wasn't using the array. As I try to rsync the data over to another drive, the numbers just keep climbing.
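    Those pending sector counts come from SMART attributes, so they are easy to watch. A hedged sketch, assuming smartmontools is installed and /dev/sdX is the failing drive:

        # Attributes 197 (Current_Pending_Sector) and 5 (Reallocated_Sector_Ct)
        # are the counters that typically climb during this kind of failure.
        smartctl -A /dev/sdX | grep -Ei 'pending|reallocated'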
  5. I have a StarTech ASMedia ASM1061 on a Supermicro X9SCM-F and it worked right away. I also have it in some HP MicroServers and it worked with unRAID as well.
  6. At one point I had 4GB, then 8GB, of RAM with vfs_cache_pressure=0 and I would run out of RAM. In my particular case I was rsyncing data against a dated backup using --link-dest=. I had to back off to vfs_cache_pressure=10. There's another kernel parameter to expand the dentry queue; at that time, with that kernel, I could not find an advantage to it. What I did find to provide an advantage was an older kernel parameter: sysctl vm.highmem_is_dirtyable=1. That changed the caching/flushing behavior of the system; however, I'm not sure how that would aid in cache_pressure vs cached inodes. It's not just about the inodes themselves. From what I had read in the past, the dentry queue came into play too.
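    For anyone reproducing this, both knobs are standard Linux sysctls. A minimal sketch using the values mentioned above (they are what worked in this case, not general recommendations):

        # How aggressively the kernel reclaims dentry/inode caches:
        # 0 = never reclaim (ran out of RAM here), 10 = reclaim reluctantly.
        sysctl vm.vfs_cache_pressure=10
        # Let highmem pages be dirtied, changing caching/flushing behavior
        # (mainly relevant on older 32-bit kernels with a highmem split).
        sysctl vm.highmem_is_dirtyable=1
        # Add the same lines to /etc/sysctl.conf to persist across reboots.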
  7. I cannot remember the outcome with regard to reliability. If I remember correctly, I retired it; there seemed to be weird issues with devices randomly dropping. I went with the 2-port StarTech card with the ASMedia chipset.
  8. Indeed. I suspect the third choice ("Auto") may eventually do what I had hoped it already did, i.e. use turbo write IF all of the disks were already spinning, and normal write otherwise. My earlier post you referred to noted that this had been discussed; I had hoped that v6.2 now had that implemented (but clearly it does not). The presence of the third choice would certainly seem to imply that it may be coming :-) In my case, I want to manually control it via cron. It just makes sense for what I do all day long.
  9. I use turbo write via cron: turn it on when I am most likely to use it all day long, and turn it off at night around bedtime. For some it may make sense to toggle it around the mover as well. Since I use my server all day long to move MP3s and re-tag them, turbo write helps reduce the wait time significantly. For those who may be interested, this is the cron table I install into /etc/cron.d/md_write_method:

        30 08 * * * [ -e /proc/mdcmd ] && echo 'set md_write_method 1' >> /proc/mdcmd
        30 23 * * * [ -e /proc/mdcmd ] && echo 'set md_write_method 0' >> /proc/mdcmd
        #
        # * * * * * <command to be executed>
        # | | | | |
        # | | | | +---- Day of the Week   (range: 0-6, 0 standing for Sunday)
        # | | | +------ Month of the Year (range: 1-12)
        # | | +-------- Day of the Month  (range: 1-31)
        # | +---------- Hour              (range: 0-23)
        # +------------ Minute            (range: 0-59)

    I find this useful when you are reading and writing to one drive most of the time. Once there are reads and writes to the other drives, things slow down, so it all depends on how a site uses its server.
  10. Appending entries via >> /var/spool/cron/crontabs/root is not the normal way it is done. Details deleted, since there is a better dynamix way...
  11. I'm good, I have my .c programs. I only wanted to add some ideas if you were looking to expand further. I think the corz compatibility is/was a great idea, and it's something I've been proposing to the other authors. I'll probably borrow some of the plugin code to see how I can do the same. I have millions of files with all sorts of file names, so I need to do it in C to avoid quoting issues. I learned that linking to the OpenSSL libraries provides the fastest md5 implementation I could find; I wouldn't be surprised if PHP uses them. Along with a compiled walk through the file system, it's as fast as it can possibly be.
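    The OpenSSL-versus-stock speed difference is easy to spot-check from the shell. A rough sketch (the 1GB test file is illustrative, not a benchmark from this post):

        # Create a throwaway 1GB test file.
        dd if=/dev/zero of=/tmp/testfile bs=1M count=1024
        # Compare coreutils md5sum against OpenSSL's libcrypto implementation.
        time md5sum /tmp/testfile
        time openssl dgst -md5 /tmp/testfile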
  12. I've actually been waiting for someone to ask about that. I was going to build in compatibility to read the extended attributes, to avoid having to rehash all of the existing files that had already been done with bunker, but I looked at the total number of downloads (at the time, just prior to publishing this plugin, it was a whole 7 downloads) and decided it wasn't worth the huge amount of debugging. (Because I'm dealing directly with users' data here, my debugging is rather extensive, to make sure there's no way I can inadvertently corrupt data -> to the point that every time I write a hash file, the plugin checks that the file name is correct, and if it's not, it immediately throws up an alert and completely stops the plugin from doing anything and everything until a reboot happens.) I figured that if anyone ever brought it up and supplied me with a sample exported file, I'd just create a script to create the .hash files from it.

    It's probably not worth your effort. I have a tool; it still needs a few options added and then to be compiled for 64-bit. I've been wasting so much time on the conversion to ESX6 and unRAID6, with the ESX USB reset problem, that I have not finished. This is the help screen so far:

        root@unRAID:/mnt/disk1/home/rcotrone/src.slacky/hashtools-work# ./hashfattrexport --help
        Usage: %s [OPTION]... PATTERN [PATTERN]...
        Export hash extended file attributes on each FILE or DIR recursively
        PATTERN is globbed by the shell; directories are processed recursively

        Filter/Name selection and interpretation:
          Filter rules are processed like find -name using fnmatch
          -n, --name               Filter by name (multiples allowed)
          -f, --filter             Filter from file (one filter file only for now)
          -X, --one-file-system    Don't cross filesystem boundaries
          -l, --maxdepth <levels>  Descend at most <levels> of directories below command line
          -C, --chdir <directory>  chdir to this directory before operating
          -r, --relative           Attempt to build relative path from provided files/dirs
                                   A second -r uses realpath() which resolves to full path
          -S, --stats              Print statistics
          -P, --progress           Print statistic progress every <seconds>
          -Z, --report-missing     Print filenames missing an extended hash attribute
          -M, --report-modified    Print filenames modified after extended hash attribute
          -0, --null               Terminate filename lines with NULL instead of default \n
          -R, --report             Report status OK,FAILED,MODIFIED,MISSING_XATTR,UPDATED
          -q, --quiet              Quiet/less output; use multiple -q's to make quieter
          -v, --verbose            Increment verbosity
          -h, --help               Help display
          -V, --version            Print version
          -d, --debug              Increment debug level

    It works like this:

        root@unRAID:/mnt/disk1/home/rcotrone/src.slacky/hashtools-work# getfattr -d strlib.*
        # file: strlib.c
        user.hash.time="1419424415"
        user.hash.value="67eef48f1199c68381127baded05f051"
        # file: strlib.h
        user.hash.time="1419424415"
        user.hash.value="214635e26ea28ccb3cb18b9b3d484248"
        # file: strlib.o
        user.hash.time="1419424415"
        user.hash.value="1d258b90f70e787b5560897fcb125e1b"

        root@unRAID:/mnt/disk1/home/rcotrone/src.slacky/hashtools-work# ./hashfattr -r strlib.*
        67eef48f1199c68381127baded05f051  strlib.c
        214635e26ea28ccb3cb18b9b3d484248  strlib.h
        1d258b90f70e787b5560897fcb125e1b  strlib.o

        root@unRAID:/mnt/disk1/home/rcotrone/src.slacky/hashtools-work# ./hashfattr -r strlib.* | md5sum -c
        strlib.c: OK
        strlib.h: OK
        strlib.o: OK

    It's like doing a find down a tree | grep | some filter to convert the output of getfattr | md5sum -c. What I have to perfect is writing an individual folder.hash per directory and/or doing the whole /mnt/somearchivefolder/hashed_directory_name.hash as previously mentioned.
  13. folder.par2 is where this plugin is going to rock and save the day for some people.
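    For context, par2 recovery files go beyond detection: they can actually repair a damaged file. A minimal sketch with par2cmdline (the file names and the 10% redundancy level are illustrative):

        # Create folder.par2 with 10% redundancy covering every file here.
        par2 create -r10 folder.par2 *
        # Later: verify, and repair from the recovery data if corruption is found.
        par2 verify folder.par2
        par2 repair folder.par2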
  14. The purpose of using the metadata/extended attribute is to store the hash with the file, so it won't matter whether you are on the disk or the user share; no database is needed, and the user does not need to know. At that point, exporting it to a local folder.hash, or to a remote/alternate location as md5hashname.hash with the embedded path, provides what is needed: a centrally/easily managed md5 from the attribute (that the user never needs to know about), plus an exported folder.hash within the directory or in an alternate location with linkage back to the source. The downside of keeping the original folder.hash within the directory is what happens when you have corruption. Building the md5hashname.hash in a central location and using a symlink safeguards the hash file while still allowing the symlink to exist in the folder (for corz, or for export). Both bitrot and bunker have export formats, but not one that is compatible with corz. I can do this fairly easily, however I don't really have the time to do it, or it would have been done by now. LOL! Maybe I'll get adventurous this week. I'll still end up using my gdbmsum program, since its side effect lets me log all changes, cache the directories, and update the hash in place.
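    A minimal sketch of the attribute scheme described above, using the stock attr tools (the user.hash.* names match the getfattr output shown earlier; the file name is a placeholder, and the filesystem must support user xattrs):

        # Compute the md5 and store it, with a timestamp, in user xattrs.
        f=somefile
        setfattr -n user.hash.value -v "$(md5sum < "$f" | awk '{print $1}')" "$f"
        setfattr -n user.hash.time  -v "$(date +%s)" "$f"
        # Later, re-emit "hash  filename" lines and let md5sum verify them.
        printf '%s  %s\n' "$(getfattr --only-values -n user.hash.value "$f")" "$f" | md5sum -c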