
Ultra Low Power 24-Bay Server - Thoughts on Build?


Pauven


The HighPoint 2760A is working! 

 

I just created a test array with 3 drives (1.5TB parity, 1.5TB data, 1.0TB data) and started a parity sync.  Speeds started off around 114MB/s, and at 10% they had dropped to about 107MB/s.  Hot plug worked just fine as well. 

 

Obviously not a full scale test, but good enough to push forward with the build.  So far this build is extremely fast, even during boot.

 

Parity Sync power draw was ~70W (lower than my current system's idle!).

 

Idle was ~50W with the 3 drives attached and spun down, ~46W without the drives, and ~18W without the 2760A.  I'm hopeful that a driver update might enable some advanced power saving features of the 2760A, but I need help to accomplish this.  See previous posts.

 

Thanks,

Paul


Excellent !!

 

Like you, I'm a bit disappointed in the 29W idle consumption of the 2760A, but it's definitely nice to know it works with UnRAID => and you're clearly getting VERY good speeds with the drives.

 

Definitely interested in results when it's a bit more loaded  :)

 


  Found some information on the driver being loaded for the 2760A...

 

The mvsas driver, version 0.8.16, has been used with the 2760A under the following environments:

 

Ubuntu 12.04 - using multiple file systems, including ZFS

- after changing the order of driver loading so it loads earlier...

 

Ubuntu kernels 3.2, 3.5, mainline build 3.6.11, and 3.5.0-21-generic #32

- after forcing mvsas into the initramfs to get it to load earlier.

- that helped a great deal, but the root cause of the issue seems to be that the first series of commands sent to the controller time out.  Subsequent commands work fine, and the controller behaves properly after that.

 

It sounds to me like the main thing to look for might be the initial commands issued right after unRAID boots in your case.  Since this seems to be a commonly known issue (at least under Ubuntu) with that driver version, it would be good to check whether the problem also exists under Slackware (unRAID) or not.  But people are also using the 2760A with no problems after initial startup under Ubuntu.

 

I would hope, and would not be surprised, if things actually work a little better with unRAID using this driver than they have in the Ubuntu world.


Thank you electron286, that is excellent info.

 

I'm still a bit confused, though.  Is mvsas a generic driver, or is it specific to the 2760A? 

 

The mvsas driver seems to be 'good enough' to run unRAID (in very limited testing, so far), but my main point of concern is that this generic driver is not idling the card.

 

I looked through some of the driver downloads on the HighPoint website, and tried to load the rr276x.ko driver file I found in the SUSE distribution, but I got an invalid format error (to be expected).

 

I downloaded the open-source Linux driver package and tried to compile it manually, but I'm getting an error that the makefile is only compatible up through Linux kernel 3.0.
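
For reference, the generic way to build an out-of-tree module against the running kernel looks roughly like this (just a sketch -- the archive and directory names below are placeholders, and the HighPoint makefile may do its own thing):

$ tar xzf rr276x-linux-src.tar.gz                          # placeholder name for the HighPoint source download
$ cd rr276x-linux-src
$ make -C /lib/modules/$(uname -r)/build M=$(pwd) modules  # build against the running kernel's headers
$ insmod ./rr276x.ko                                       # load the freshly built module
$ dmesg | tail                                             # check the kernel log for driver messages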

 

I'm not very good at this Linux compiling stuff.  I can get close, but I don't know how to overcome some of these last hurdles.

 

Maybe someone who's smarter than me and knows how to compile can get a ko file compiled on their unRAID box and share it back my way...  ;D

 

I'm also thinking of installing the card in my Windows box and seeing how it idles with a Windows driver.

 

 


...nice results Pauven!

 

As far as I can see, mvsas is the generic driver for Marvell chipset based HBAs, see: http://cateee.net/lkddb/web-lkddb/SCSI_MVSAS.html

It should load for an AOC-SASLP-MV8 and other HBAs as well.

 

Installing another driver is different from the process in windoze.

You need the kernel module compiled for the specific version running/distributed with unRAID.

If you don't have that, you need to recompile the driver....you need a Slackware build system for that.

 

What electron286 was referring to is the way the driver gets loaded.

Drivers are kernel modules and reside as files in a specific path on the boot disk, inside the root file-system.

A plain Linux kernel has a driver package "hooked up" in a ramdisk (initramfs), which enables it to load modules from that path during boot, before the "real" root file-system has been mounted.

This solves a "chicken & egg" problem, where drivers depend on each other but reside on a physical disk, which is out of reach until the drivers for it have been loaded.

The initial ramdisk gets removed after the early boot stage, once the physical root file-system has been made available.

 

Sometimes, with some hardware, there is a timing problem.

Also there can be a situation where there is more than one driver available. Moving the desired one into the ramdisk will ensure it is loaded early and first.

Having all drivers inside the ramdisk will bloat the kernel image....having a lean kernel image makes it possible to boot from a small boot partition (like a USB stick) and load the rest from a different source, even a root file-system on another disk.

In the case of unRAID this is not relevant, since everything needs to fit on the stick anyway.
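
As an aside, on Ubuntu (where electron286's reports came from) forcing mvsas into the initramfs is only a couple of commands -- a sketch, assuming the stock initramfs-tools setup:

$ echo mvsas | sudo tee -a /etc/initramfs-tools/modules   # ask initramfs-tools to include the module
$ sudo update-initramfs -u                                # rebuild the initramfs for the running kernel
$ lsinitramfs /boot/initrd.img-$(uname -r) | grep mvsas   # confirm the module made it in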

 

...back to topic, I would keep it as it is and try to research whether a driver from another source would actually improve power management.

It should be in the changelog of the driver / driver-source.

Only then is it worth a try....there are people around running build systems to create modifications/enhancements....I am sure they could help you out with creating the module file for you.

Edit: the driver source needs to be compatible with the kernel version, of course. Most likely sources are downwards compatible with major kernel versions (2.4, 2.6, 3.0, 3.5).

A version for a recent Ubuntu release is what I would look into.


Actually, given the more readily available Windows drivers, which also tend to have a bit more intuitive interfaces, it may be a good test to simply do the following:

 

(a) Measure the idle power of your Windows box (perhaps a set time after boot)

(b) Shut down; install the card; boot, and wait, and measure the power consumption after the same time as in (a)

(c) Now install the Windows driver -- look carefully for power conservation options and be sure they're set okay; then shut down; reboot; and repeat the measurement (again, after the same set amount of time)  :)

 

FWIW, from what I've been able to find about this card, the most common complaint is that it doesn't have an active cooling fan -- which implies heat is a consistent issue; and this does not bode well for a low-power standby state.    Hopefully that's wrong !!

 


Actually, given the more readily available Windows drivers, which also tend to have a bit more intuitive interfaces, it may be a good test to simply do the following:

...

 

Excellent advice, garycase.  Thank You.

 

I've completed testing the power consumption of the 2760A under Windows 7 x64.  No drives were attached to the 2760A during testing.  The wattage figures below are the lowest observed.

  • 122W - Baseline
  • 149W - 2760A Installed, BIOS 1.1, no drivers
  • 150W - 2760A Installed, BIOS 1.1, 1.0.10 drivers - Shipped version in box
  • 150W - 2760A Installed, BIOS 1.1, 1.2.12 drivers - Current version on website
  • 150W - 2760A Installed, BIOS 1.3, 1.2.12 drivers
  • 123W - 2760A Removed, back to Baseline

So it seems that the 2760A idles at ~28W.  I found no extra power saving features exposed in the HighPoint management console.  Looking at the device properties, Power States D0, D1 and D3 are supported.  In the Device Manager, there were also 3 additional devices (HighPoint ... RCM) that appeared related to the 2760A, but these only supported D0 and D3.  I was not able to determine which Power State was active during my test, though I am assuming D0 (Full Power).  Even though D1 (Light Power Saving) is supported, I didn't see any Windows Power States (S0...S6) being mapped to D1 for this device.

 

Interestingly, the 2760A shows as three separate RAID controller devices in the Device Manager - something I had seen in Linux as well.

 

I am disappointed that the 2760A shipped with BIOS 1.1, as 1.2 (which is required for drives larger than 2TB) was released over a year ago, and 1.3 (the latest and greatest) was released last October.  Fortunately, the BIOS upgrade was the easiest I have ever experienced, using the HighPoint management console.  I'm glad I did the testing in Windows; otherwise I would have had to upgrade the BIOS from DOS.

 

Has anyone performed any power consumption analysis for the other popular SAS controller cards?


As far as I can see, mvsas is the generic driver for Marvell chipset based HBAs, see: http://cateee.net/lkddb/web-lkddb/SCSI_MVSAS.html

It should load for an AOC-SASLP-MV8 and other HBAs as well.

 

Installing another driver is different from the process in windoze.

...

 

Thank you Ford Prefect for that wonderful information.  I know you said earlier that English is not your native tongue, but it does not show.  Your English is excellent, and your ability to explain technical topics in layman's terms is skillful.

 

Based upon my "2760A in Windows" power consumption test, I no longer think different drivers will reduce power consumption.  Since performance was very good, I'm going to continue utilizing the mvsas driver for now.

 

I plan to reach out to HighPoint and see if they care to comment on the power consumption figures.  Most likely, they expect this type of device to be connected to a full array of drives and operate under load at all times, so idle power consumption may not have been a concern.

 


The SFF-8087 SAS cables I purchased were labeled as 'No Sides' on the website, and at the time I had no idea what that meant.

 

Now that I have the cables, they have a sticker on them that indicates 'All Sidebands Removed'.  I had to look it up, but these sidebands are extra wires used to transfer the SGPIO signals.

 

Both the 2760A controller card and the X-Case RM-424 support SGPIO, so this was a feature I was thinking of utilizing.  Since I've apparently purchased the wrong cables, before I return them I have a simple question:

 

Does SGPIO work with unRAID?

 

Thanks for any help!

Paul


So it seems that the 2760A idles at ~28W.

...

Interestingly, the 2760A shows as three separate RAID controller devices in the Device Manager - something I had seen in Linux as well.

...

I am disappointed that the 2760A shipped with BIOS 1.1, as 1.2 (which is required for drives larger than 2TB) was released over a year ago, and 1.3 (the latest and greatest) was released last October.

 

I'm not really surprised at the power consumption ... as I noted earlier, the one common thing folks note about this card is that it runs warm.

 

Nor am I surprised that it shows as 3 distinct controllers in Device Manager ... remember that the card actually uses 4 88SE9485's (but only 3 are active in the 24-port version).

 

The BIOS version is disappointing, but not uncommon ... many motherboards are shipped the same way (with older BIOS's).  At least it was simple to upgrade it in Windows.    I've bought motherboards before that were incompatible with the CPU I purchased with them because they didn't have a new-enough BIOS to support the CPU.  THAT is a real pain, as you have to install a CPU that's supported;  do the BIOS upgrade; and then replace the CPU.    If you don't happen to have a supported CPU handy, it's REALLY a PITA !!!

 


I'm not really surprised at the power consumption ... as I noted earlier, the one common thing folks note about this card is that it runs warm.

 

Warm might be an understatement.  I've measured 170°F / 77°C on the heatsink while idling in open air (outside of case, without fans).  I expect the underlying chips are even hotter.  Not sure what operating temperature range is spec'd for these chips.

 

The X-Case RM-424 produces very strong airflow, so I'm expecting lower idle temps with the 2760A in the case.  I'll have to do some idle/load temp measurements in the case to determine if heat management is an issue on the 2760A, but I'm anticipating everything will be okay as the manufacturer intended.


Since it is built out of three standard chipsets (the fact that the system sees it as a set of three "cards" supports this), I think it comes naturally that it will use a considerable amount of power, well above that of a single chipset.

 

A single controller like the AOC-SASLP-MV8 is reported to draw 6-7W, AFAIR...the M1015, based on a single LSI2008, is certainly in the same range.

AFAIR LSI uses another chipset package for cards above 8 ports, hence reducing power consumption...


Well, the build currently isn't living up to expectations.  Not only is idle power higher than hoped, but parity check performance has dropped.

 

After moving all my drives over, I first brought up the server on 5.0 RC6, which I've been running for a while.  I got a parity check speed of 4MB/s... yes, four.

 

I quickly upgraded to RC12a (same version I used in my initial testing), and parity speeds climbed back up into the low 40's.

 

Something's not quite right, but I can't put my finger on it.  The only bottleneck I'm seeing right now is attached to my beer...

 

Heading to the mountains for the holiday weekend.  Out of sight, out of mind.  Everyone have a great Memorial Day.


So I've been digging into the performance issues, and haven't been able to identify a root cause or solution yet to the slow parity checks on the 2760A.  I could certainly use some help on where to look or tests to perform.

 

I was initially thinking the bottleneck was the 2760A connecting at x8 instead of x16.  I did piece together the following info regarding how the 2760A connects to the motherboard.

 

 

PCIe 2.0 x16 Slot  < 2.0 x16 >  PLX PEX 8648 PCI Bridge
    |-- < 2.0 x8 > (unused on the 2760A, but the PCI bridge still shows)
    |-- < 2.0 x8 > Marvell 88SE9485
    |-- < 2.0 x8 > Marvell 88SE9485
    |-- < 2.0 x8 > Marvell 88SE9485

 

When I was first examining the output of lspci -vv I saw the x8 connection to the Marvell chips, which is why I thought the card was not connecting correctly.  I finally discovered that the PLX bridge chip is connecting to the motherboard at the full x16, so it is operating as designed.  It was very hard to read the results from lspci, because the 2760A shows up as 8 different devices (one PCI bridge to the motherboard, four PCI bridges to the Marvell controller chips, and the three Marvell chips themselves) and it took me a while to figure out what was what.

 

By the way, if you are looking for an easy way to check the PCIe lanes, here's how to do it.

 

1) Run lspci, which will give you a list of PCI devices.  Take note of the first pair of numbers on the left (e.g. 04:00) for a device you want to examine closer.

2) Run lspci -s 04:00 -vv | grep Width  (replacing 04:00 with the device you want to examine).

 

Here's an example:

 

root@Tower:~# lspci -s 06:00 -vv | grep Width
                LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM unknown, Latency L0 <512ns, L1 <64us
                LnkSta: Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

 

The LnkCap line is the device's Capabilities, while the LnkSta line is the device's current Status.

 

The Speed represents PCIe 1.0, 2.0 or 3.0, where 2.5GT/s is 1.0, 5GT/s is 2.0, and 8GT/s is 3.0.

 

The Width is the number of PCIe lanes.

 

If the LnkSta values are lower than the LnkCap values, you are not running your device at the full capabilities.
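
And if you'd rather scan every device at once instead of one slot at a time, a quick loop over lspci does it (just a sketch; run it as root so the capability fields are readable):

root@Tower:~# for dev in $(lspci | awk '{print $1}'); do echo "== $dev =="; lspci -s $dev -vv | grep -E 'LnkCap:|LnkSta:'; done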


So I've been digging into the performance issues, and haven't been able to identify a root cause or solution yet to the slow parity checks on the 2760A.

 

It may be a stupid question/suggestion, but you're not running the Simple-Features plugin are you?

 

My parity checks drop to those kinds of speeds if the Simple-Features web interface is open.


It may be a stupid question/suggestion, but you're not running the Simple-Features plugin are you?

 

My parity checks drop to those kinds of speeds if the Simple-Features web interface is open.

 

Not a stupid question at all.  I've never ever installed Simple Features (it's on my list to try someday).  I do run unMENU, but I've tested with it disabled with no variance in speed.  At this point, I am running a stock 'go' file.
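
For reference, the stock go file is essentially just this (quoting from memory, so treat it as approximate):

#!/bin/bash
# Start the Management Utility
/usr/local/sbin/emhttp &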


40MB/s is VERY disappointing for a parity check.

 

Do you have an old, small capacity drive in that mix?    I noted you have 7 3TB WD Reds ... I'd try an array with JUST those drives and see what the parity check speed shows.  I'd expect it to start in the 120+MB/s range and work its way down (after ~ 8hrs) to the 80-90MB/s range.

 

In any event, testing with all of the same make/model/capacity eliminates the variance caused by other drives that may be slower and may be operating on a different range of the drive (i.e. inner cylinder reads are much slower than outer cylinder reads -- so mixed drive sizes definitely cause appreciably longer parity checks).    This does NOT account for the very slow speeds you've seen ... but at least it will give you a baseline based on a single drive type.

 

FWIW I also am a "no plugin" kind of guy ... I want a rock-solid storage server; nothing else.  The only package I install is UnMenu ... and the only packages I have it install are the CleanPowerDown and APC UPS packages, to auto-shutdown in the event of an extended power outage.    I also use it to look at the SMART data on the drives from time to time.

 


I noted you have 7 3TB WD Reds ... I'd try an array with JUST those drives and see what the parity check speed shows.  I'd expect it to start in the 120+MB/s range and work its way down (after ~ 8hrs) to the 80-90MB/s range.

 

I've been thinking of how I could execute this test without putting my data at risk.  I don't have 7 'spare' drives; these are real drives that are part of a larger array, and I don't want to throw away my parity to conduct this test.

 

Here's what I'm thinking - pull my current parity drive and put it someplace safe, create a new array definition that only has the Red 3TB drives, and put a spare WD Red 3TB drive in for parity.  Run my parity tests.  After my tests are done, I can revert back to the original configuration.  If a drive fails during my testing, I can reinsert my original parity, revert to the original array definition, and rebuild.

 

I'm thinking I need to back up the array configuration files before I take this path; that way I can simply restore the configuration files, reinsert the parity drive, reboot, and I'm right back where I started.  Does this sound safe?
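
In case it helps anyone following along, backing up the config is a one-liner -- a sketch, assuming the flash is mounted at /boot as usual, with the destination path just an example (any disk or PC share will do):

root@Tower:~# cp -a /boot/config /mnt/disk1/flash-config-backup    # super.dat in here holds the drive assignments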

 

 

Another test I'm also thinking of conducting is moving the drives around on the controller.  Currently I have 16 data drives and they are all installed on SAS ports 1-4.  SAS port 5 is unused, and SAS port 6 has just the parity drive.  That means two of the Marvell controllers are fully saturated, while the third controller only has the parity drive on it.

 

I was thinking of moving one drive from each port (1-4) down to ports 5 and 6, making everything distributed more evenly among the Marvell drive controllers.  This is such an easy test, I think I'll go ahead and do it now.


Another test I'm also thinking of conducting is moving the drives around on the controller.  Currently I have 16 data drives and they are all installed on SAS ports 1-4.  SAS port 5 is unused, and SAS port 6 has just the parity drive.  That means two of the Marvell controllers are fully saturated, while the third controller only has the parity drive on it.

 

I was thinking of moving one drive from each port (1-4) down to ports 5 and 6, making everything distributed more evenly among the Marvell drive controllers.  This is such an easy test, I think I'll go ahead and do it now.

 

Wow, this had a dramatic impact!  My parity speeds jumped from ~40MB/s to the high 60's!  I just saw it peak at 75MB/s, almost double what I've seen before the change.

 

              Before           After
---------------------------------------
Port 1    |   4 drives    |   3 drives
Port 2    |   4 drives    |   3 drives
Port 3    |   4 drives    |   3 drives
Port 4    |   3 drives    |   3 drives
Port 5    |   0 drives    |   3 drives
Port 6    |   1 drive     |   1 drive

              Before           After
---------------------------------------
9485 1    |   8 drives    |   6 drives
9485 2    |   7 drives    |   6 drives
9485 3    |   1 drive     |   4 drives

 

 

There is 4GB/s of bandwidth between the PLX PCI bridge chip and each Marvell 88SE9485 controller chip.  With 8 drives max on each segment, that is 500 MB/s of bandwidth per drive (aka 1 whole PCIe 2.0 lane).  There is no way the PCIe bridge is the bottleneck.

 

Do you think the Marvell 88SE9485 controller chips are being overworked? I need to look up the specs on this chip.

 


Definitely requires CAUTION to be sure you avoid data loss.

 

Your idea re: doing a new config with just the WD Reds and using a new parity drive will work fine IF you ...

 

(a)  copy your flash drive contents to a "safe" place on your PC; 

(b)  shut down and redo the config as you've noted;

(c)  redo your USB flash drive with a new "stock" UnRAID  [You could actually skip this & just boot to it, then do a "New Config" when it gives you a bunch of missing drive errors ... but it's "cleaner" to just start with a nice "virgin" flash drive]

(d)  boot UnRAID and create your new config ... be SURE you assign the correct drives.  I'd assign all the data drives first;  confirm the array recognizes everything okay (Start the array and confirm nothing shows that it requires formatting -- if it does, STOP ... do NOT format it !!).  Then Stop the array, add the parity drive, and let it compute parity (this will take a lot of hours).    When it's done computing parity, then run a parity check to confirm it wrote okay AND to see how it performs.

 

You can then shut down the array;  copy the original flash drive contents back;  reconfigure everything as you had it; and all should be fine.

 

BTW, moving things around the controller ports is an excellent idea ... basically distributing the load differently may in fact make a pretty significant difference !!

 


Wow, this had a dramatic impact!  My parity speeds jumped from ~40MB/s to the high 60's!  I just saw it peak at 75MB/s, almost double what I've seen before the change.

...

Do you think the Marvell 88SE9485 controller chips are being overworked? I need to look up the specs on this chip.

 

VERY nice difference.    It's hard to believe the 9485's are being "overworked" ... but clearly distributing the load more uniformly had a big impact.    60MB/s with a 75MB/s peak is actually as high as you can expect from a lot of older drives ... so you may be just fine.    It would be interesting to know what those numbers are with only 3TB Reds ... but whether you want to bother with that test or not is up to you.  Just for grins, what is the exact mix of drives you have connected right now (that produced the results you just listed) ??

 


              Before           After
---------------------------------------
Port 1    |   4 drives    |   3 drives (3x Red 3TB)
Port 2    |   4 drives    |   3 drives (1x Red 3TB, 2x Samsung 1.5TB)
Port 3    |   4 drives    |   3 drives (2x Red 3TB, 1x Samsung 1.5TB)
Port 4    |   3 drives    |   3 drives (3x Samsung 2TB)
Port 5    |   0 drives    |   3 drives (2x Red 3TB, 1x Samsung 1.5TB)
Port 6    |   1 drive     |   1 drive  (1x Red 3TB for parity)

              Before           After
---------------------------------------
9485 1    |   8 drives    |   6 drives (4x Red 3TB, 2x Samsung 1.5TB)
9485 2    |   7 drives    |   6 drives (2x Red 3TB, 1x Samsung 1.5TB, 3x Samsung 2TB)
9485 3    |   1 drive     |   4 drives (3x Red 3TB, 1x Samsung 1.5TB)

 

Just for grins, what is the exact mix of drives you have connected right now (that produced the results you just listed) ??

 

I added the "After" drive distribution to the table above.  Looks like I lost track somewhere; I'm up to 9 of the 3TB Reds now.

 

In looking around, it seems that the Marvell 88SE9485 is used on the AOC-SAS2LP-MV8, correct?  It seems to me that this is one of the most common controllers used in unRAID builds, so I'm wondering - has anyone noticed a performance hit just from adding more drives?


I found this unRAID forum post regarding slow parity checks with the AOC-SAS2LP-MV8. 

Using the same controller chip, I think the 2760A may be experiencing identical issues.  I noticed parity speeds went from 4MB/s on RC6 to 40MB/s on RC12a, results not all that different from what some have been experiencing with the AOC-SAS2LP-MV8.

 

Ironically, I purposely wanted to stay away from the AOC-SAS2LP-MV8 because I had often read of these issues, and now it seems that's exactly what  I bought with the 2760A.  :-\

 

I think the 2760A has allowed me to perform one test that no one else has performed: redistributing the drives across the three controllers on the 2760A.  The fact that dropping the number of drives per controller chip gave a nice performance boost might be an indication of where the problem lies.

 

This also gives me restrained hope for the future.  Assuming the underlying software issue is ever resolved, I think that this could be a very fast controller card.

 

I've been trying to find the mvsas driver revision history/changelog, as the driver is likely a main culprit.  I'd like to see what has been changing in this driver over the various Linux kernel builds unRAID has been slogging through.  Anyone have a link?
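
In case anyone has a kernel git tree handy, the per-driver history is easy to pull -- a sketch, with "linux" standing in for wherever a clone of the mainline source lives:

$ cd linux                                    # placeholder: a local clone of the kernel source
$ git log --oneline -- drivers/scsi/mvsas     # commit history for the mvsas driver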

