unRAID Server release 4.4-beta2 available



Download.

 

Critical bug fix in this release.  If you are experimenting with 4.4-beta1, please upgrade to -beta2 and discard -beta1.

 

Also rewrote disk spin-up/spin-down (again), reverting to software monitoring of I/O to determine when to spin down a disk.  It no longer uses the 'hdparm' command to set the spindown delay.  Also added some more choices for the spindown delay value in the System Management Utility.

 


unRAID Server 4.4-beta2 Release Notes
=====================================

Changes from 4.4-beta1 to 4.4-beta2
-----------------------------------

Improvement: added more spin-down delay settings (15, 30, 45 minutes, 6, 7, 8, 9 hours) and changed method of disk spin-down/spin-up:
- No longer use disk's internal spin-down timer; instead monitor I/O directly.
- A spun-down drive will spin up on demand when it receives an I/O command (this is unchanged).
- The 'Spin Up' button in the System Management Utility now uses a SMART command to spin up the disks.
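
(For reference, you can confirm a drive's current power state from the console with hdparm; /dev/sda below is just an example device:)

    hdparm -C /dev/sda    # reports "active/idle" when spun up, "standby" when spun down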

Bug fix: if a drive took longer than 10 seconds to spin up, then a driver timeout error and subsequent SATA channel reset would appear in the system log.
[fix is to increase scsi_execute() timeout (from 10 to 30 sec) in drivers/ata/libata-scsi.c]

Bug fix: the 'Clear Statistics' button will now also clear the Cache Drive statistics.

Bug fix: fixed crash which could occur if multiple disks fail in the same stripe simultaneously.

Improvement: upgrade smartmontools package to version 5.38.


Changes from 4.3.3 to 4.4-beta1
-------------------------------

New feature: support SMP (multi-core processors).

Improvement: read performance enhancements.

Improvement: update to linux kernel 2.6.26.5.

Bug fix: fixed a race condition in network initialization which would sometimes result in DHCP failing to obtain an IP address.



Upgrade Instructions (Please Read Carefully)
============================================

If you are currently running unRAID Server 4.2-beta1 or higher (including 4.2.x 'final'), please copy the following files from the new release to the root of your Flash device:
    bzimage
    bzroot

If you are currently running unRAID server 4.0 or 4.1, please copy the following files from the new release to the root of your Flash device:
    bzimage
    bzroot
    syslinux.cfg
    menu.c32
    memtest

This can be done either by plugging the Flash into your PC or by copying the files to the 'flash' share on your running server.  The server must then be rebooted.
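
For example, from the server's own console the copy could look like this (a sketch only: the source path is hypothetical, assuming the release zip was extracted to a share on disk1, and the flash is mounted at /boot as usual):

    # hypothetical source path; adjust to wherever you extracted the release
    cp /mnt/disk1/unraid-4.4/bzimage /mnt/disk1/unraid-4.4/bzroot /boot/
    reboot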

If you are currently running unRAID Server 3.0-beta1 or higher, please follow these steps to upgrade:

1. Referring to the System Management Utility 'Main' page, make a note of each disk's model/serial number; you will need this information later.

2. Shut down your server, remove the Flash and plug it into your PC.

3. Right-click your Flash device listed under My Computer and select Properties.  Make sure the volume label is set to "UNRAID" (without the quotes) and click OK.  You do NOT need to format the Flash.

4. Copy the files from the new release to the root of your Flash device.

5. Right-click your Flash device listed under My Computer and select Eject.  Remove the Flash, install it in your server, and power up.

6. After your server has booted up, the System Management Utility 'Main' page will probably show no devices; this is OK.  Navigate to the 'Devices' page and, using the model/serial number information gathered in step 1, assign each of your hard drives to the correct disk slot.

7. Go back to the 'Main' page and your devices should appear correctly.  You may now Start the array.


If you are installing this release to a new Flash, please refer to instructions on our website at:

http://www.lime-technology.com/wordpress/?page_id=19


All relevant changes for this release have been made to the wiki.

 

Tom, if you get time, can you look at adding a temperature-based spin-down feature?  Something like "spin-down timer settings per temperature range".

 

Excellent release btw, will definitely test this one.


Looks like 4.4b2 broke the temperature readings in unmenu.  Probably due to the Smartmontools update from 5.36 to 5.38 ???

 

Edit... smartctl doesn't work at all... the C++ library needs to be updated.

 

Edit2... yup, loaded the cxxlibs-6.0.8-i486-4 pkg and all is well.... Tom, you need them for beta 3  :D

 


Edit... smartctl doesn't work at all... the C++ library needs to be updated.

Edit2... yup, loaded the cxxlibs-6.0.8-i486-4 pkg and all is well.... Tom, you need them for beta 3  :D

 

Oh geeze I was going to say something but didn't want to look (or make anyone else look) silly.. Sowwy.

 

Here's a link to 12.0 CXX Libs

http://packages.slackware.it/package.php?q=12.0/cxxlibs-6.0.8-i486-4

 

Download to your flash, telnet in and run

installpkg cxxlibs-6.0.8-i486-4.tgz
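
Spelled out in full from the console (a sketch assuming the package file was saved to the root of the flash, which unRAID mounts at /boot):

    cd /boot
    installpkg cxxlibs-6.0.8-i486-4.tgz
    smartctl --version    # should now run instead of failing on the missing library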


Tom, is there any way the Cache drive portion could be mod'd to allow mounting of a partition on a drive as cache instead of requiring the entire drive?

 

???

 

 

I would love to see this happen. Then I could make a cache drive double duty as a boot (supplementary root) drive and a cache drive.


Tom, is there any way the Cache drive portion could be mod'd to allow mounting of a partition on a drive as cache instead of requiring the entire drive?

 

???

 

Do you mean a separate partition on one of your data drives?  If so, no.

 

But you can use the 'fdisk' command to manually create multiple partitions on your cache drive, and unRAID will use partition 1 (so you have to have a partition 1).  You could then use the other partitions on that drive for non-unRAID purposes.  If you want to do this, though, you also have to manually create the ReiserFS file system in partition 1 before assigning the drive as the Cache drive, to prevent the System Management Utility from wanting to format it (which would also create a new partition table).  Note this works only with the Cache drive, not the parity/data drives.  Make sense?
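
A minimal sketch of those steps, assuming the cache drive shows up as the hypothetical /dev/sdX and holds nothing you care about:

    fdisk /dev/sdX        # interactively create partition 1 (and any extra partitions)
    mkreiserfs /dev/sdX1  # pre-create the ReiserFS file system in partition 1
    # then assign the drive as the Cache drive in the System Management Utility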


Make sense?

Yes. I figured this was possible as long as it was partitioned and formatted outside of the user interface.

If there was a way to select a specific partition on a drive designated as cache, that would be helpful.

One reason being that the outer tracks can be much faster than the inner tracks.

If I plan to use a partition for a supplementary root or vmware images, I want that to be where I designate it.

 

For example.

 

1 boot

2 swap

3 root & data

4 cache

 

 


Make sense?

Yes. I figured this was possible as long as it was partitioned and formatted outside of the user interface.

If there was a way to select a specific partition on a drive designated as cache, that would be helpful.

One reason being that the outer tracks can be much faster than the inner tracks.

If I plan to use a partition for a supplementary root or vmware images, I want that to be where I designate it.

 

For example.

 

1 boot

2 swap

3 root & data

4 cache

 

 

Perhaps you can use the disk partition "label" to identify the "cache" partition on the "cache" drive when more than one partition exists.  That way, it will not have to default to the first partition.  You could use the equivalent of

vol_id /dev/[hs]d[a-z]1 to find the matching volume label.
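
For instance, something like this loop could do the lookup (a sketch only: vol_id is the udev helper of this era, and 'unraid-cache' is a hypothetical label):

    # print the first partition whose file-system label is "unraid-cache"
    for p in /dev/[hs]d[a-z]1; do
        if vol_id "$p" 2>/dev/null | grep -q '^ID_FS_LABEL=unraid-cache$'; then
            echo "cache partition: $p"
            break
        fi
    done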

 

Joe L.


Make sense?

Yes. I figured this was possible as long as it was partitioned and formatted outside of the user interface.

If there was a way to select a specific partition on a drive designated as cache, that would be helpful.

One reason being that the outer tracks can be much faster than the inner tracks.

If I plan to use a partition for a supplementary root or vmware images, I want that to be where I designate it.

 

For example.

 

1 boot

2 swap

3 root & data

4 cache

 

 

 

The partition numbers do not need to be in ascending order on the disk.  For example, if you are trying to put the cache on the innermost cylinders, then you can do this:

 

1 cache

2 swap

3 root & data

4 boot

 

But create them in this order: 4 2 3 1 (from beginning of disk to end of disk).
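
For illustration only, that layout could also be written non-interactively with old-style sfdisk; a sketch, where /dev/sdX and all the MB offsets are hypothetical (sized for roughly a 460 GB drive), so review what sfdisk proposes before letting it touch a real disk:

    # Table slots 1..4; the start offsets (in MB) place them on disk as 4 2 3 1:
    #   slot 4 = boot at 0-100, slot 2 = swap at 100-2148,
    #   slot 3 = root & data at 2148-450000, slot 1 = cache at 450000-end.
    # Add --force if sfdisk balks at partitions not being in ascending disk order.
    sfdisk -uM /dev/sdX << 'EOF'
    450000,,83
    100,2048,82
    2148,447852,83
    0,100,83,*
    EOF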


Tom,

 

/proc/diskstats shows vastly different numbers for number of blocks read and written than the unRAID main page.

 

I just re-booted, and I see this in /proc/diskstats

root@Tower:/proc# cat diskstats
   7    0 loop0 0 0 0 0 0 0 0 0 0 0 0
   7    1 loop1 0 0 0 0 0 0 0 0 0 0 0
   7    2 loop2 0 0 0 0 0 0 0 0 0 0 0
   7    3 loop3 0 0 0 0 0 0 0 0 0 0 0
   7    4 loop4 0 0 0 0 0 0 0 0 0 0 0
   7    5 loop5 0 0 0 0 0 0 0 0 0 0 0
   7    6 loop6 0 0 0 0 0 0 0 0 0 0 0
   7    7 loop7 0 0 0 0 0 0 0 0 0 0 0
   8    0 sda 297 581 3265 3220 6 2 64 60 0 3260 3280
   8    1 sda1 293 555 3025 3160 6 2 64 60 0 3200 3220
   8   16 sdb 76 612 1689 3140 58 23 648 3010 0 730 6150
   8   17 sdb1 72 586 1449 3090 58 23 648 3010 0 680 6100
   8   32 sdc 221 871 7997 860 57 15 79 1970 0 1420 2830
   8   33 sdc1 220 871 7989 860 57 15 79 1970 0 1420 2830
   8   48 sdd 1399 9540 59712 3880 0 0 0 0 0 2600 3880
   8   49 sdd1 850 7750 41000 2800 0 0 0 0 0 1520 2800
   3    0 hda 524 584 5105 7260 6 2 64 70 0 7300 7330
   3    1 hda1 520 558 4865 7160 6 2 64 70 0 7200 7230
   3   64 hdb 797 584 7289 9150 6 2 64 50 0 9160 9200
   3   65 hdb1 793 558 7049 9080 6 2 64 50 0 9090 9130
  22    0 hdc 955 584 8553 10800 6 2 64 50 0 10760 10840
  22    1 hdc1 951 558 8313 10720 6 2 64 50 0 10680 10760
  22   64 hdd 198 584 2497 2960 6 2 64 60 0 2970 3020
  22   65 hdd1 194 558 2257 2880 6 2 64 60 0 2890 2940
  33    0 hde 260 584 2993 3800 6 2 64 60 0 3800 3860
  33    1 hde1 256 558 2753 3740 6 2 64 60 0 3740 3800
  33   64 hdf 182 584 2369 2770 6 2 64 60 0 2770 2830
  33   65 hdf1 178 558 2129 2740 6 2 64 60 0 2740 2800
  34    0 hdg 447 584 4489 5860 6 2 64 60 0 5870 5920
  34    1 hdg1 443 558 4249 5740 6 2 64 60 0 5750 5800
  34   64 hdh 262 584 3009 3950 6 2 64 60 0 3980 4000
  34   65 hdh1 258 558 2769 3890 6 2 64 60 0 3920 3940
  56    0 hdi 486 593 4873 6470 6 2 64 60 0 6230 6530
  56    1 hdi1 482 567 4633 6410 6 2 64 60 0 6170 6470
  56   64 hdj 109 584 1785 1540 6 2 64 80 0 1600 1620
  56   65 hdj1 105 558 1545 1500 6 2 64 80 0 1560 1580
  57    0 hdk 32 584 1169 230 6 2 64 60 0 270 290
  57    1 hdk1 28 558 929 210 6 2 64 60 0 250 270
   9    1 md1 770 0 6160 0 8 0 64 0 0 0 0
   9    2 md2 928 0 7424 0 8 0 64 0 0 0 0
   9    3 md3 171 0 1368 0 8 0 64 0 0 0 0
   9    4 md4 233 0 1864 0 8 0 64 0 0 0 0
   9    5 md5 155 0 1240 0 8 0 64 0 0 0 0
   9    6 md6 421 0 3368 0 8 0 64 0 0 0 0
   9    7 md7 235 0 1880 0 8 0 64 0 0 0 0
   9    8 md8 468 0 3744 0 8 0 64 0 0 0 0
   9    9 md9 82 0 656 0 8 0 64 0 0 0 0
   9   10 md10 497 0 3976 0 8 0 64 0 0 0 0
   9   11 md11 267 0 2136 0 8 0 64 0 0 0 0
   9   12 md12 5 0 40 0 8 0 64 0 0 0 0

 

And this on the main page of the web-interface:

[screenshot: web-interface Main page disk statistics]

 

Here is the disk inventory:

[pre]

Oct 1 12:03:13 Tower emhttp: Device inventory:

Oct 1 12:03:13 Tower emhttp: pci-0000:00:1f.1-ide-0:0 (hda) ata-ST3750640A_5QD2AX3G

Oct 1 12:03:13 Tower emhttp: pci-0000:00:1f.1-ide-0:1 (hdb) ata-HDS725050KLAT80_KRVA03ZAG3V5LD

Oct 1 12:03:13 Tower emhttp: pci-0000:00:1f.1-ide-1:0 (hdc) ata-ST3400620A_5QH00QPN

Oct 1 12:03:13 Tower emhttp: pci-0000:00:1f.1-ide-1:1 (hdd) ata-HDS725050KLAT80_KRVA03ZAG4V99D

Oct 1 12:03:13 Tower emhttp: pci-0000:00:1f.2-scsi-0:0:0:0 (sda) ata-ST3750640AS_5QD2ZR29

Oct 1 12:03:13 Tower emhttp: pci-0000:00:1f.2-scsi-1:0:0:0 (sdb) ata-ST31000340AS_9QJ0JPJS

Oct 1 12:03:13 Tower emhttp: pci-0000:02:00.0-ide-0:0 (hde) ata-ST3400620A_5QH00PF4

Oct 1 12:03:13 Tower emhttp: pci-0000:02:00.0-ide-0:1 (hdf) ata-ST3400633A_3PM0LZ3D

Oct 1 12:03:13 Tower emhttp: pci-0000:02:00.0-ide-1:0 (hdg) ata-ST3400633A_3PM0BE0T

Oct 1 12:03:13 Tower emhttp: pci-0000:02:00.0-ide-1:1 (hdh) ata-ST3500641A_3PM147P2

Oct 1 12:03:13 Tower emhttp: pci-0000:02:02.0-ide-0:0 (hdi) ata-MAXTOR_STM3500630A_5QG00FTK

Oct 1 12:03:13 Tower emhttp: pci-0000:02:02.0-ide-0:1 (hdj) ata-Maxtor_6Y250P0_Y63KH45E

Oct 1 12:03:13 Tower emhttp: pci-0000:02:02.0-ide-1:0 (hdk) ata-Maxtor_6Y250P0_Y63KH8FE

[/pre]

Should they match more closely?  There is no way they should all show 500+ blocks read.  I've not accessed any files; furthermore, disk12 is empty.

 

I can understand a few blocks read for that disk, to read the partition table and the empty directory, but not 500+.

 


Installed v4.4-beta2 and seems to work fine, but first boot was not completely smooth.  One drive timed out twice (with the exception Emask, Timeout, frozen error sequence), before it had even finished the unRAID startup.  It was not disabled, the resets were apparently successful, just a bunch of errors in the syslog, and no apparent harm.  Drives seemed fine, so I captured the syslog (of course!) and rebooted, and it came up fine with a clean syslog the second time.

 

There is a change in the permissions on the flash drive, partly for the better, but partly for the worse.  From Windows, I cannot change the attributes of files on the flash or overwrite files, for example, overwriting bzroot.  I can delete the file and then copy to the flash.  On the plus side, all files on the flash drive are automatically executable, and show as Hidden and System from Windows.  I tested a data drive, and attributes were changeable, were maintained correctly during a copy, and overwrites worked as normal.  From the Linux console, I could overwrite files on the flash drive; however, chmod would indicate successful attribute changes, but nothing actually changed.

 

I have a request for Tom.  Now that the spin-down wait times are no longer being sent to the drives (those commands were logged in the syslog, allowing me to deduce which drive IDs went with which disk number), there is no way to associate a drive's device ID with its disk number.  Would it be possible to bring back the import table from earlier versions, logged only on array start?  A sample from the past is below:

md0: import [8,16] (sdb) SAMSUNG HD501LJ  S0MUJ1KP206407       offset: 63 size: 488386552
md1: import [8,0] (sda) ST3500630AS      5QG0D11W offset: 63 size: 488386552
md2: import [8,80] (sdf) SAMSUNG HD501LJ  S0MUJ1KP206409       offset: 63 size: 488386552
md3: import [8,32] (sdc) ST3500630AS      9QG0FF6W offset: 63 size: 488386552
md4: import [8,48] (sdd) ST3250823AS      3ND1KVS9 offset: 63 size: 244198552
md5: import [3,0] (hda) Maxtor 6L300R0 L61HAQZG offset: 63 size: 293057320
md6: import [8,64] (sde) ST3320620AS      3QF0G0C5 offset: 63 size: 312571192
md7: import: no device
md8: import: no device
md9: import: no device
md10: import: no device
md11: import: no device
md12: import: no device
md13: import: no device


Installed v4.4-beta2 and seems to work fine, but first boot was not completely smooth.  One drive timed out twice (with the exception Emask, Timeout, frozen error sequence), before it had even finished the unRAID startup.  It was not disabled, the resets were apparently successful, just a bunch of errors in the syslog, and no apparent harm.  Drives seemed fine, so I captured the syslog (of course!) and rebooted, and it came up fine with a clean syslog the second time.

 

Please post that syslog if you still have it.

 

There is a change in the permissions on the flash drive, partly for the better, but partly for the worse.  From Windows, I cannot change the attributes of files on the flash or overwrite files, for example, overwriting bzroot.  I can delete the file and then copy to the flash.  On the plus side, all files on the flash drive are automatically executable, and show as Hidden and System from Windows.  I tested a data drive, and attributes were changeable, were maintained correctly during a copy, and overwrites worked as normal.  From the Linux console, I could overwrite files on the flash drive; however, chmod would indicate successful attribute changes, but nothing actually changed.

 

Do you like this new behavior or should we change it back?

 

I have a request for Tom.  Now that the spin-down wait times are no longer being sent to the drives (those commands were logged in the syslog, allowing me to deduce which drive IDs went with which disk number), there is no way to associate a drive's device ID with its disk number.  Would it be possible to bring back the import table from earlier versions, logged only on array start?  A sample from the past is below:

 

Do you mean you want to see the [major,minor] of each disk?


Tom,

 

/proc/diskstats shows vastly different numbers for number of blocks read and written than the unRAID main page.

 

...

 

Actually, the Main page is showing fields from /proc/diskstats (well, it's showing /sys/block/xxx/stat for each disk, but that is the source of /proc/diskstats).  Unfortunately, it's showing the wrong fields  :(  Fixed in the next beta.
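
For anyone who wants to cross-check by hand, a quick sketch that prints the sectors-read and sectors-written fields straight from /sys/block (field order per the kernel's block-layer stat documentation):

    # /sys/block/<dev>/stat fields: reads, reads merged, sectors read, ms reading,
    # writes, writes merged, sectors written, ms writing, I/Os in flight,
    # ms doing I/O, weighted ms doing I/O
    for f in /sys/block/[hs]d[a-z]/stat; do
        read -r rio rmrg rsec rms wio wmrg wsec wms infl ticks wticks < "$f"
        printf '%-4s sectors read: %-10s written: %s\n' \
            "$(basename "$(dirname "$f")")" "$rsec" "$wsec"
    done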


Do you mean you want to see the [major,minor] of each disk?

I think he is asking for an easier way, in the syslog, to determine the relationship between the mdX device, the physical drive /dev/sdY, and the drive model/serial number.

 

I don't know if the major/minor is as helpful when looking at the logs, but RobJ is far better at helping unRAID users than I am.

I think it is the information shown on the "Devices" page on the web-interface he is requesting.


Installed v4.4-beta2 and seems to work fine, but first boot was not completely smooth.  One drive timed out twice (with the exception Emask, Timeout, frozen error sequence), before it had even finished the unRAID startup.  It was not disabled, the resets were apparently successful, just a bunch of errors in the syslog, and no apparent harm.  Drives seemed fine, so I captured the syslog (of course!) and rebooted, and it came up fine with a clean syslog the second time.

 

Please post that syslog if you still have it.

 

The problem is a little more complicated than I thought.  Although I pride myself on above-average thoroughness, I blew it on my own syslog.  There are 3 'exception Emask with reset' error sequences, not 2, and only the second is a 'frozen, Timeout'; the other 2 are 'frozen, Device error', but all 3 appear to be related to SWNCQ, the new support for NCQ in the sata_nv driver, now enabled by default in 2.6.26 kernels.  Googling turned up others with similar error sequences (here and here and in the discussion here, more discussion here and here), with a possible resolution being pushed into 2.6.27 here.  Syslog attached below.  I'll play with disabling SWNCQ (sata_nv.swncq=0), but I wonder if this is a case where the hard resets are a good thing; that is, if a timing issue fails it, then reset and try again until it gets it right!
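
(For anyone following along: disabling it amounts to adding the parameter to the kernel line in syslinux.cfg on the flash.  The stanza below is typical rather than verbatim, so check your own file; only the append line changes:)

    label unRAID OS
      kernel bzimage
      append initrd=bzroot sata_nv.swncq=0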

 

There is a change in the permissions on the flash drive, partly for the better, but partly for the worse.  From Windows, I cannot change the attributes of files on the flash or overwrite files, for example, overwriting bzroot.  I can delete the file and then copy to the flash.  On the plus side, all files on the flash drive are automatically executable, and show as Hidden and System from Windows.  I tested a data drive, and attributes were changeable, were maintained correctly during a copy, and overwrites worked as normal.  From the Linux console, I could overwrite files on the flash drive; however, chmod would indicate successful attribute changes, but nothing actually changed.

 

Do you like this new behavior or should we change it back?

 

Short answer, my vote would be to revert back.

 

Seems too non-standard; it does not offer any security advantages, just a bit of convenience.  I like the fact that unRAID does not hide its Linux base, but reduces exposure to it for new and non-technical users, especially Windows users.  Those with Linux experience, or who want to try more advanced things, find a standard Linux environment, apart from reasonable limitations such as it being stripped down for memory residency.

 

Although I can't speak for them, I suspect the experienced Linux users will not like it at all, but of course they can probably figure out how to disable or work around it.  Others (primarily Windows users) perhaps won't have to have the Execute flag explained to them, but this only applies to programs and scripts running from the flash drive, not other drives, which may be confusing.  And in general, if they are doing something that requires the Execute flags to be set, they are probably following the instructions of more experienced users, either in forum posts or wiki pages such as HowTos, and should be getting the appropriate commands.

 

I have a request for Tom.  Now that the spin-down wait times are no longer being sent to the drives (those commands were logged in the syslog, allowing me to deduce which drive IDs went with which disk number), there is no way to associate a drive's device ID with its disk number.  Would it be possible to bring back the import table from earlier versions, logged only on array start?  A sample from the past is below:

 

Do you mean you want to see the [major,minor] of each disk?

 

Joe nailed it.  All I want is to be able to identify, for example, sdg as the Parity drive and hdc as Disk 4, without having to ask the user.  The rest I can deduce.  Could the following

md1: running, size: 488386552 blocks

md2: running, size: 488386552 blocks

md3: running, size: 488386552 blocks

md4: running, size: 244198552 blocks

md5: running, size: 244198552 blocks

 

be enhanced to this?  (I don't care if it's sdc or /dev/sdc)

md0: running, sdc, size: 488386552 blocks

md1: running, sda, size: 488386552 blocks

md2: running, sdb, size: 488386552 blocks

md3: running, hdb, size: 488386552 blocks

md4: running, hda, size: 244198552 blocks

md5: running, sdd, size: 244198552 blocks


RobJ,

Thanks for your thoroughness & all the support you provide here - I very much appreciate it!

 

I re-enabled the driver 'import' messages so they are logged in the system log, and will fix the flash permissions - you'll see both in the next beta.

 

As for 'sata_nv.swncq=0' I'd be interested if this fixes those errors.

Improvement: added more spin-down delay settings (15, 30, 45 minutes, 6, 7, 8, 9 hours) and changed method of disk spin-down/spin-up:

 

Thx for these new values.  I've really been looking forward to these.

 

Would it be possible to put in a shorter increment - say 5 mins?  I've been having some issues with xbmc as my media interface (after upgrading to a recent release) spinning up all my drives, and until I get to the bottom of it, it would be great if spindown occurred more quickly than 15 mins (unless there is a reason it shouldn't spin down sooner).


Improvement: added more spin-down delay settings (15, 30, 45 minutes, 6, 7, 8, 9 hours) and changed method of disk spin-down/spin-up:

 

Thx for these new values.  I've really been looking forward to these.

 

Would it be possible to put in a shorter increment - say 5 mins?  I've been having some issues with xbmc as my media interface (after upgrading to a recent release) spinning up all my drives, and until I get to the bottom of it, it would be great if spindown occurred more quickly than 15 mins (unless there is a reason it shouldn't spin down sooner).

You probably do not want to spin up/down every 5 minutes.  The reason is that the drives are rated for only a limited number of "spin-up" cycles.  You might be shortening the life of the drive.

 

See the discussion here: https://fcp.surfsite.org/modules/newbb/viewtopic.php?topic_id=37244&viewmode=flat&order=DESC&start=10

 

Joe L.


Ahh... well, that clarifies it a lot for me.  I knew there was probably a good reason for it, and I definitely do NOT want to shorten the life of my drives.

 

I've just got to get over the fact that the current version of xbmc is spinning them up (don't know why) when my older version didn't (they've made quite a lot of changes since I last upgraded).  Can't see why it should when all the info is cached locally - but then I don't know a lot about it  :'(.  Thx muchly for the response.  Always appreciated.


I agree with Joe.  In fact, anything lower than 1 hour seems too short to me.  The difference in power savings has got to be insignificant, but the wear and tear on bearings probably is not.

 

Power saving is not the only reason to spin down... heat is another major factor.  Spun-up drives are hot whilst spun-down ones are cold.  Scale this to the density of unRAID home users and it makes sense to spin down more quickly.

 

However, it's not a great solution.  A better one is to cool drives better (not always an option) or to spin down based on temperature (which IMO is a critical missing feature).  unRAID knows if a drive is cooking and should have the facility to save that drive.


I've been having some issues with xbmc as my media interface (after upgrading to a recent release) spinning up all my drives, and until I get to the bottom of it, it would be great if spindown occurred more quickly than 15 mins (unless there is a reason it shouldn't spin down sooner).

 

You should check whether you have set "Update library on startup" and/or "Always update library in background" in XBMC (Settings / Video settings / Library).

