Welcome to MilkyWay@home

Server Downtime March 28, 2022 (12 hours starting 00:00 UTC)

Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,143,149
RAC: 3
Message 72672 - Posted: 11 Apr 2022, 2:01:40 UTC - in response to Message 72668.  

It depends on the capacity and the technology being used – HDD vs SSD, etc. Can take up to maybe a day or two.
No disk of any technology takes that long to write to every sector of itself. 3 hours max.
ID: 72672
Profile Wrend
Joined: 4 Nov 12
Posts: 96
Credit: 251,528,484
RAC: 0
Message 72674 - Posted: 11 Apr 2022, 2:57:15 UTC - in response to Message 72672.  
Last modified: 11 Apr 2022, 2:59:33 UTC

It depends on the capacity and the technology being used – HDD vs SSD, etc. Can take up to maybe a day or two.
No disk of any technology takes that long to write to every sector of itself. 3 hours max.

Took about a full day for a 16TB WD Gold HDD which has a max sequential write speed of about 250MB/s. I don't recall exactly how long it was, but it was definitely more than 3 hours. As conventional HDDs go, it's a decently fast one, so I figured there are likely some HDDs out there that would take a fair bit longer.
ID: 72674
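A quick sanity check on those figures, as a minimal Python sketch: the 16TB capacity and ~250MB/s rate are the numbers quoted above, while the decimal-TB conversion and the slower 150MB/s figure for the inner tracks are my own assumptions.

```python
# Rough time to write every sector of a drive sequentially, assuming the
# quoted rate could be sustained across the whole surface.

def full_write_hours(capacity_tb: float, write_mb_per_s: float) -> float:
    capacity_mb = capacity_tb * 1_000_000  # decimal units, as drive capacities are rated
    return capacity_mb / write_mb_per_s / 3600

print(f"16 TB @ 250 MB/s: {full_write_hours(16, 250):.1f} h")  # ~17.8 h
print(f"16 TB @ 150 MB/s: {full_write_hours(16, 150):.1f} h")  # ~29.6 h
```

At the rated peak that works out to roughly 18 hours, and since sequential speed drops toward the inner tracks, "about a full day" for a full-surface pass is plausible.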
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,143,149
RAC: 3
Message 72676 - Posted: 11 Apr 2022, 3:49:46 UTC - in response to Message 72674.  
Last modified: 11 Apr 2022, 3:50:08 UTC

It depends on the capacity and the technology being used – HDD vs SSD, etc. Can take up to maybe a day or two.
No disk of any technology takes that long to write to every sector of itself. 3 hours max.

Took about a full day for a 16TB WD Gold HDD which has a max sequential write speed of about 250MB/s. I don't recall exactly how long it was, but it was definitely more than 3 hours. As conventional HDDs go, it's a decently fast one, so I figured there are likely some HDDs out there that would take a fair bit longer.
Who would buy a disk that takes 18 hours to write to its full capacity?! Did you get it from China?
ID: 72676
Profile Wrend
Joined: 4 Nov 12
Posts: 96
Credit: 251,528,484
RAC: 0
Message 72677 - Posted: 11 Apr 2022, 4:04:40 UTC - in response to Message 72676.  
Last modified: 11 Apr 2022, 4:51:47 UTC

I'm not sure offhand where Western Digital makes them. They're "enterprise grade" HDDs made for server RAIDs and similar – fairly high-end as far as conventional HDDs go, not something you'd typically find in a PC or the like. They're made for capacity and reliability for continuous use over several years. Overall system speed is determined by the RAID setup, so typically not limited to individual disk speeds. That's another matter when rebuilding a RAID disk though, if the RAID is in use, and so on.

Anyway, it can potentially take quite a while and there are other factors to consider. It really just depends on the setup and use case scenario.
ID: 72677
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,143,149
RAC: 3
Message 72678 - Posted: 11 Apr 2022, 5:12:57 UTC - in response to Message 72677.  
Last modified: 11 Apr 2022, 5:14:16 UTC

I'm not sure offhand where Western Digital makes them. They're "enterprise grade" HDDs made for server RAIDs and similar – fairly high-end as far as conventional HDDs go, not something you'd typically find in a PC or the like. They're made for capacity and reliability for continuous use over several years. Overall system speed is determined by the RAID setup, so typically not limited to individual disk speeds. That's another matter when rebuilding a RAID disk though, if the RAID is in use, and so on.

Anyway, it can potentially take quite a while and there are other factors to consider. It really just depends on the setup and use case scenario.
A mechanical drive is too slow nowadays for a desktop; to use them in a server is incompetent. I have a budget NVMe in this desktop which does 2500MB/sec, 10 times faster than your "enterprise grade" rubbish.
ID: 72678
Profile Wrend
Joined: 4 Nov 12
Posts: 96
Credit: 251,528,484
RAC: 0
Message 72679 - Posted: 11 Apr 2022, 5:31:49 UTC - in response to Message 72678.  
Last modified: 11 Apr 2022, 5:38:45 UTC

Yeah, my main system drive on my PC is a 4TB Samsung Pro SSD and the secondary system drive is an older 1TB. My 64GB of RAM makes for a decent cache too, or even a RAM drive if I feel like it. HDDs are still fine for storage drives where speed is less of a concern. I think HDDs still have a viable place in some use scenarios (for now), but in general it's hard not to recommend SSDs. Lower power usage too, which can mean more power supply efficiency and reliability and so on.
ID: 72679
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,143,149
RAC: 3
Message 72681 - Posted: 11 Apr 2022, 5:40:37 UTC - in response to Message 72679.  
Last modified: 11 Apr 2022, 5:47:37 UTC

Yeah, my main system drive on my PC is a 4TB Samsung Pro SSD and the secondary system drive is an older 1TB. My 64GB of RAM makes for a decent cache too, or even a RAM drive if I feel like it. HDDs are still fine for storage drives where speed is less of a concern. I think HDDs still have a viable place in some use scenarios (for now), but in general it's hard not to recommend SSDs. Lower power usage too, which can mean more power supply efficiency and reliability and so on.
The only time I ever use a mechanical drive is for something cheap and slow, like a backup I can run overnight, or for storing huge amounts of data that isn't accessed all at once, like security camera footage or program installation files, or TV/Films. At 10p a GB, SSDs are no longer that expensive, and really should be used for all servers.

As for lower power usage, I find that SSDs use less. For example:
HDD: 3.4W to 9W: https://www.seagate.com/www-content/datasheets/pdfs/skyhawk-3-5-hdd-DS1902-8-1803US-en_US.pdf
SSD: 0.005W to 6.3W: https://www.notebookcheck.net/SK-Hynix-PC601-1TB-HFS001TD9TNG-SSD-Benchmarks.541018.0.html

As for reliability, an SSD wears out at a precisely known point that you can see in the SMART data. A hard disk just decides not to spin up one day.
ID: 72681
Profile Wrend
Joined: 4 Nov 12
Posts: 96
Credit: 251,528,484
RAC: 0
Message 72682 - Posted: 11 Apr 2022, 6:17:27 UTC - in response to Message 72681.  
Last modified: 11 Apr 2022, 6:19:09 UTC

Yeah, that's what I was meaning: SSDs use less power, so they're more efficient on power supplies and by extension make the power supplies a little more reliable in a drive enclosure for a NAS or whatever.

In a RAID you're typically not getting any usable data off of a single disk by itself, but regarding failing HDDs that aren't in a RAID, that data is often recoverable depending on how far you want to delve into it and of course they have SMART specifications as well. With HDDs the data is at least potentially recoverable, where with SSDs it generally isn't. But yeah, SSDs have come a long way and aren't likely to fail on you. Still not quite what I would hope for in capacity regarding SLC and MLC, but I think their internal controllers these days can often make up for it with how they do wear leveling and so on. But then too HDDs are now sometimes incorporating some of these technologies with internal controllers and an internal SSD as cache, basically hybrid drives of a sort.

But anyway... We've gotten off on a tangent. I'm not sure what MW@H is using nor what specific challenges they may be facing with it.
ID: 72682
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,143,149
RAC: 3
Message 72683 - Posted: 11 Apr 2022, 7:47:59 UTC - in response to Message 72682.  

Yeah, that's what I was meaning: SSDs use less power, so they're more efficient on power supplies and by extension make the power supplies a little more reliable in a drive enclosure for a NAS or whatever.
Sorry, I misread what you wrote and thought you meant HDDs used less power than SSDs.

In a RAID you're typically not getting any usable data off of a single disk by itself, but regarding failing HDDs that aren't in a RAID, that data is often recoverable depending on how far you want to delve into it
Yes, I've heard of that, but it means sending many dollars to a specialist company.

and of course they have SMART specifications as well.
But they don't know when a mechanical part will fail and crash the head into the platter. An SSD has a very predictable wear-out point, apart from the early ones with bugs. OCZ were terrible, but I think they went bust. I sent back 90% of their drives, and 90% of the replacements broke too.

Still not quite what I would hope for in capacity regarding SLC and MLC
SSD capacity per price is increasing astronomically; they'll soon overtake HDDs. At the moment the cheapest of each I can get in the UK is £60 per TB for an SSD and £15 per TB for an HDD.

but I think their internal controllers these days can often make up for it with how they do wear levelling and so on.
Not sure how that works, but I do know even the cheapest ones have spare capacity you don't see, which is mapped in when some of the capacity you're using wears out. I assume after there's no spare left, the drive just shrinks? I'll know soon, one of mine is at "40% remaining life" and I'm thrashing it with VirtualBox stuff. From a Google search, it appears they're not that clever: they just switch to read-only mode so you can copy the data off them. How hard can it be to just use the unused space on the disk? Why does nobody think when designing things?
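The usual answer is that the controller quietly maps hidden spare blocks in place of worn ones and, once the spares run out, locks the drive read-only rather than shrinking the visible capacity, presumably because shrinking a block device underneath a live partition table and filesystem is messier than it sounds. A toy sketch of that bookkeeping, purely illustrative and not any vendor's actual firmware logic (the block counts are made up):

```python
class ToySSD:
    """Toy over-provisioning model: retire worn blocks into a hidden spare
    pool, then go read-only once the spares are exhausted."""

    def __init__(self, visible_blocks: int, spare_blocks: int):
        self.visible_blocks = visible_blocks  # capacity the OS sees; never changes
        self.spare_blocks = spare_blocks      # hidden over-provisioned blocks
        self.read_only = False

    def retire_worn_block(self) -> None:
        if self.spare_blocks > 0:
            self.spare_blocks -= 1            # remap a spare in place of the worn block
        else:
            self.read_only = True             # spares gone: preserve data, refuse writes

ssd = ToySSD(visible_blocks=1000, spare_blocks=70)  # ~7% spare area, an illustrative figure
for _ in range(71):
    ssd.retire_worn_block()
print(ssd.visible_blocks, ssd.read_only)  # 1000 True: capacity unchanged, writes refused
```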

But then too HDDs are now sometimes incorporating some of these technologies with internal controllers and an internal SSD as cache, basically hybrid drives of a sort.
I've never bothered with those, I either want economy or massive speed.

But anyway... We've gotten off on a tangent. I'm not sure what MW@H is using nor what specific challenges they may be facing with it.
They're using HDDs. Tom's said to change to SSDs would cost $10,000. I'm sure between us we can provide that.
ID: 72683
Profile mikey
Joined: 8 May 09
Posts: 3321
Credit: 520,649,000
RAC: 32,636
Message 72687 - Posted: 11 Apr 2022, 10:10:52 UTC - in response to Message 72668.  

It depends on the capacity and the technology being used – HDD vs SSD, etc. Can take up to maybe a day or two.


My SCSI ones took a LONG time, I HOPE they are not still using that technology level though!!
ID: 72687
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,143,149
RAC: 3
Message 72690 - Posted: 11 Apr 2022, 11:15:55 UTC - in response to Message 72687.  

It depends on the capacity and the technology being used – HDD vs SSD, etc. Can take up to maybe a day or two.
My SCSI ones took a LONG time, I HOPE they are not still using that technology level though!!
I remember SCSI, it was when the year began with 19.
ID: 72690
Profile Tom Donlon
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Joined: 10 Apr 19
Posts: 408
Credit: 120,203,200
RAC: 0
Message 72694 - Posted: 11 Apr 2022, 13:58:42 UTC - in response to Message 72646.  

And remember people, credit is secondary, what really counts is that we are helping scientists in their work to better understand our Galaxy!


Yes, and thanks for that! But, credit is also a useful indicator of who is able to contribute during weird server issues, which can be helpful to figure out what problems there are at any given time.
ID: 72694
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,143,149
RAC: 3
Message 72696 - Posted: 11 Apr 2022, 14:01:11 UTC - in response to Message 72694.  
Last modified: 11 Apr 2022, 14:01:54 UTC

Yes, and thanks for that! But, credit is also a useful indicator of who is able to contribute during weird server issues, which can be helpful to figure out what problems there are at any given time.
Yes, if my credit drops, I look at why my computers are not doing well.

How are you getting on with the server? Overnight I seemed to get 3 GPUs running most of the time, is everything now fixed? Yesterday I was only getting very small amounts of work.
ID: 72696
Profile Wrend
Joined: 4 Nov 12
Posts: 96
Credit: 251,528,484
RAC: 0
Message 72699 - Posted: 11 Apr 2022, 15:56:35 UTC - in response to Message 72683.  
Last modified: 11 Apr 2022, 16:10:03 UTC

...
Not sure how that works, but I do know even the cheapest ones have spare capacity you don't see, which is mapped in when some of the capacity you're using wears out. I assume after there's no spare left, the drive just shrinks? I'll know soon, one of mine is at "40% remaining life" and I'm thrashing it with VirtualBox stuff. From a Google search, it appears they're not that clever: they just switch to read-only mode so you can copy the data off them. How hard can it be to just use the unused space on the disk? Why does nobody think when designing things?
...

Yeah, I guess in theory you could increase the over-provisioning capacity and shrink usable partition sizes to only good cells. I would assume they prioritize having capacity and data integrity for as long as they can, and then selling more SSDs when they can't. My own loose rule of thumb has been to ideally use up to half the available capacity of an SSD if possible and then to upgrade or replace them when using more than three-fourths, moving the older SSDs to laptops or something that sees less continuous use, but that's just me.

It seems that by the time I need higher capacities they're on the market, so it has worked out well enough for me so far, excluding some form factors and interfaces that are harder to find.
ID: 72699
Kiska
Joined: 31 Mar 12
Posts: 94
Credit: 151,956,524
RAC: 1,103
Message 72701 - Posted: 11 Apr 2022, 16:22:49 UTC - in response to Message 72678.  
Last modified: 11 Apr 2022, 16:32:44 UTC

I'm not sure offhand where Western Digital makes them. They're "enterprise grade" HDDs made for server RAIDs and similar – fairly high-end as far as conventional HDDs go, not something you'd typically find in a PC or the like. They're made for capacity and reliability for continuous use over several years. Overall system speed is determined by the RAID setup, so typically not limited to individual disk speeds. That's another matter when rebuilding a RAID disk though, if the RAID is in use, and so on.

Anyway, it can potentially take quite a while and there are other factors to consider. It really just depends on the setup and use case scenario.
A mechanical drive is too slow nowadays for a desktop; to use them in a server is incompetent. I have a budget NVMe in this desktop which does 2500MB/sec, 10 times faster than your "enterprise grade" rubbish.


And pray tell me what happens when that "budget" drive runs out of SLC cache? That drive I am going to guess is a Crucial P1 class SSD. For 500GB this is split into 5GB of SLC cache with the remainder as QLC. The number you are quoting is peak performance; when the SLC cache runs out it is no faster than an HDD, and in some cases it is slower.

At least with an HDD the performance is consistent, i.e. if it's spec'd for 250MB/s read/write it'll do that till the disk fails. With consumer SSDs the performance metrics fly out the window after the SLC cache is exhausted; enterprise SSDs are designed to mimic HDD performance characteristics, e.g. consistent read/write behaviour.

I've never used RAID before but I was unaware that a disk rebuild takes so long.
It doesn't. I used to rebuild disks in hours, without impacting user performance one bit. But I had the sense to use equipment that was up to the task. Clearly MW was only just coping, and any tiny thing like a disk rebuild is the end of the world. It's nothing special, it's just copying the data from the good disks to the new disk. In fact how does he cope with backups?


I see you have never seen enterprise setups. One of my university's research-data clusters, which is accessed quite frequently by supercomputers, had a multiple-disk failure, and we didn't lose data. The array was 7PB of (redundant) spinning rust, and it took 2 weeks to fully rebuild. I believe the array was using ZFS with raidz3, 25 drives per group, and there were 30 groups.

Since you work in the field, you should know it isn't just "copying" data to the new disk: when a new drive is inserted, the RAID controller (or the software, if it's done in software) reconstructs the data from what remains in the array, and this process is heavy if done in software.

It depends on the capacity and the technology being used – HDD vs SSD, etc. Can take up to maybe a day or two.
My SCSI ones took a LONG time, I HOPE they are not still using that technology level though!!
I remember SCSI, it was when the year began with 19.


I'll have you know that Ultra320 is a 2003 standard. Also, Fibre Channel is a form of SCSI, and obviously SAS (Serial Attached SCSI) is also SCSI, with SAS 4.0 in the draft phase.
ID: 72701
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,143,149
RAC: 3
Message 72713 - Posted: 12 Apr 2022, 3:04:30 UTC - in response to Message 72699.  
Last modified: 12 Apr 2022, 3:05:17 UTC

Yeah, I guess in theory you could increase the over-provisioning capacity and shrink usable partition sizes to only good cells. I would assume they prioritize having capacity and data integrity for as long as they can, and then selling more SSDs when they can't. My own loose rule of thumb has been to ideally use up to half the available capacity of an SSD if possible and then to upgrade or replace them when using more than three-fourths, moving the older SSDs to laptops or something that sees less continuous use, but that's just me.

It seems that by the time I need higher capacities they're on the market, so it has worked out well enough for me so far, excluding some form factors and interfaces that are harder to find.
I buy what I have to. If the icon in Windows goes red, it's nearly full (90 or 95%?); I then do a disk cleanup and tidy through things myself. If it's still very full, I upgrade. They work just fine nearly full.

Maybe once you get to the stage of no spare cells, everything is pretty worn out anyway from the even wearing, and they want you to stop using it and grab your data.
ID: 72713
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,143,149
RAC: 3
Message 72714 - Posted: 12 Apr 2022, 3:14:18 UTC - in response to Message 72701.  

A mechanical drive is too slow nowadays for a desktop; to use them in a server is incompetent. I have a budget NVMe in this desktop which does 2500MB/sec, 10 times faster than your "enterprise grade" rubbish.
And pray tell me what happens when that "budget" drive runs out of SLC cache? That drive I am going to guess is a Crucial P1 class SSD. For 500GB this is split into 5GB of SLC cache with the remainder as QLC. The number you are quoting is peak performance; when the SLC cache runs out it is no faster than an HDD, and in some cases it is slower.
Nope, I tested it; it can write or read that continuously. Even if what you were saying were true, I would think the server has big peak loads which an SSD would handle well. And then of course there's the obvious advantage of almost zero seek time. With a server the disk needs to access multiple parts at once; moving the heads all over the place is just absurd for 1000 users trying to get different things at the same time.

At least with an HDD the performance is consistent, i.e. if it's spec'd for 250MB/s read/write it'll do that till the disk fails.
Yeah right, try accessing even two files at once. The heads jump back and forth and your 250 becomes 2. Now consider MW has thousands of users.
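The arithmetic behind that, as a rough sketch: every number below is an assumed textbook value for a 7200rpm drive rather than a measurement of any particular model, so treat the exact result loosely.

```python
# Effective throughput when an HDD must seek for every small read,
# e.g. many clients pulling different files at the same time.

avg_seek_ms = 8.5             # assumed average seek time
rotational_latency_ms = 4.17  # half a revolution at 7200 rpm (60 / 7200 / 2 seconds)
request_kb = 64               # assumed size of each scattered read

iops = 1000 / (avg_seek_ms + rotational_latency_ms)  # ~79 requests per second
mb_per_s = iops * request_kb / 1024                  # ~4.9 MB/s

print(f"{iops:.0f} IOPS -> {mb_per_s:.1f} MB/s effective")
```

So the headline sequential figure only survives while the heads stay put; under scattered concurrent access the drive is seek-bound and delivers single-digit MB/s, whatever the exact request size.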

It doesn't. I used to rebuild disks in hours, without impacting user performance one bit. But I had the sense to use equipment that was up to the task. Clearly MW was only just coping, and any tiny thing like a disk rebuild is the end of the world. It's nothing special, it's just copying the data from the good disks to the new disk. In fact how does he cope with backups?
I see you have never seen enterprise setups. One of my university's research-data clusters, which is accessed quite frequently by supercomputers, had a multiple-disk failure, and we didn't lose data. The array was 7PB of (redundant) spinning rust, and it took 2 weeks to fully rebuild. I believe the array was using ZFS with raidz3, 25 drives per group, and there were 30 groups.
No idea why you think I've never seen enterprise setups. I've worked in universities and schools with 1000s of users accessing the server. I've used disks and SSDs. But I bought decent stuff that was nowhere near its maximum capabilities in normal use, so when disks failed, it rebuilt them easily. You did get a disk controller that handles that, right? You didn't have the CPU doing all the work?

I'll have you know that Ultra320 is a 2003 standard.
Oooh 2003. Hint: it's now 2022. Ultra 320 is older than my Renault which is worth £150.
ID: 72714
Profile mikey
Joined: 8 May 09
Posts: 3321
Credit: 520,649,000
RAC: 32,636
Message 72718 - Posted: 12 Apr 2022, 10:45:21 UTC - in response to Message 72690.  

It depends on the capacity and the technology being used – HDD vs SSD, etc. Can take up to maybe a day or two.
My SCSI ones took a LONG time, I HOPE they are not still using that technology level though!!


I remember SCSI, it was when the year began with 19.


That would be true!
ID: 72718
Mr P Hucker
Joined: 5 Jul 11
Posts: 990
Credit: 376,143,149
RAC: 3
Message 72719 - Posted: 12 Apr 2022, 11:00:00 UTC - in response to Message 72718.  
Last modified: 12 Apr 2022, 11:00:24 UTC

It depends on the capacity and the technology being used – HDD vs SSD, etc. Can take up to maybe a day or two.
My SCSI ones took a LONG time, I HOPE they are not still using that technology level though!!


I remember SCSI, it was when the year began with 19.


That would be true!
Oops, I just melted the plug for the GPU power supply. I wondered what that smell was. I guess some MW tasks appeared. I'll stick in a larger fuse....
ID: 72719
Kiska
Joined: 31 Mar 12
Posts: 94
Credit: 151,956,524
RAC: 1,103
Message 72728 - Posted: 12 Apr 2022, 18:39:45 UTC - in response to Message 72714.  

A mechanical drive is too slow nowadays for a desktop; to use them in a server is incompetent. I have a budget NVMe in this desktop which does 2500MB/sec, 10 times faster than your "enterprise grade" rubbish.
And pray tell me what happens when that "budget" drive runs out of SLC cache? That drive I am going to guess is a Crucial P1 class SSD. For 500GB this is split into 5GB of SLC cache with the remainder as QLC. The number you are quoting is peak performance; when the SLC cache runs out it is no faster than an HDD, and in some cases it is slower.
Nope, I tested it; it can write or read that continuously. Even if what you were saying were true, I would think the server has big peak loads which an SSD would handle well. And then of course there's the obvious advantage of almost zero seek time. With a server the disk needs to access multiple parts at once; moving the heads all over the place is just absurd for 1000 users trying to get different things at the same time.

I see you didn't test long enough then. Here is a Tom's Hardware review of the Crucial P1 SSD I mentioned: https://www.tomshardware.com/reviews/crucial-p1-nvme-ssd-qlc,5852-3.html
I'll quote a snippet:
Official write specifications are only part of the performance picture. Most SSD makers implement an SLC cache buffer, which is a fast area of SLC-programmed flash that absorbs incoming data. Sustained write speeds can suffer tremendously once the workload spills outside of the SLC cache and into the "native" TLC or QLC flash. We hammer the SSDs with sequential writes for 15 minutes to measure both the size of the SLC buffer and performance after the buffer is saturated.

1TB variant
The Intel 660p is faster than the P1 for the first 20 seconds of this heavy write workload, but after that, the P1 took the lead until the buffer was full. Crucial’s P1 wrote 149GB of data before its write speed degraded from 1.7GB/s down to an average of 106MB/s.

500GB variant
Crucial’s P1 features a rather large SLC write cache. It helps the SSD to absorb about 73GB of data at a rate of 1GB/s before it fills. This is sufficient for most consumer workloads, but after that, performance suffers drastically. We all know when you add more bits to a NAND cell, write performance suffers without an SLC cache. But in the Crucial P1’s case, performance is dreadful. After its SLC cache exhausts, the native direct to QLC write speed is just 60MB/s on average.
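Putting those quoted numbers together, a minimal sketch: the 149GB cache size, 1.7GB/s cached rate and 106MB/s post-cache rate are the 1TB-variant figures from the review above, while the transfer sizes are arbitrary examples of my own.

```python
def avg_write_mb_s(transfer_gb, cache_gb, cache_mb_s, post_cache_mb_s):
    """Average rate for one large write that starts in the SLC cache and
    then spills into the native QLC flash."""
    cached_mb = min(transfer_gb, cache_gb) * 1000
    spilled_mb = max(transfer_gb - cache_gb, 0) * 1000
    seconds = cached_mb / cache_mb_s + spilled_mb / post_cache_mb_s
    return (cached_mb + spilled_mb) / seconds

print(f"{avg_write_mb_s(500, 149, 1700, 106):.0f} MB/s")  # ~147 MB/s for a 500 GB write
print(f"{avg_write_mb_s(100, 149, 1700, 106):.0f} MB/s")  # ~1700 MB/s: fits entirely in cache
```

For writes much larger than the cache the average collapses toward the post-cache rate, which is the point about sustained workloads; short desktop-sized bursts never leave the cache and see the headline speed.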


At least with an HDD the performance is consistent, i.e. if it's spec'd for 250MB/s read/write it'll do that till the disk fails.
Yeah right, try accessing even two files at once. The heads jump back and forth and your 250 becomes 2. Now consider MW has thousands of users.


And pray tell me what businesses were using before the advent of SSDs. A single 15k rpm drive can do about 700 TPS on a database, which should be sufficient for MySQL for the BOINC server to work with. Also, seti@home worked fine off of HDDs, and that project was the true definition of a shoestring budget.

It doesn't. I used to rebuild disks in hours, without impacting user performance one bit. But I had the sense to use equipment that was up to the task. Clearly MW was only just coping, and any tiny thing like a disk rebuild is the end of the world. It's nothing special, it's just copying the data from the good disks to the new disk. In fact how does he cope with backups?
I see you have never seen enterprise setups. One of my university's research-data clusters, which is accessed quite frequently by supercomputers, had a multiple-disk failure, and we didn't lose data. The array was 7PB of (redundant) spinning rust, and it took 2 weeks to fully rebuild. I believe the array was using ZFS with raidz3, 25 drives per group, and there were 30 groups.
No idea why you think I've never seen enterprise setups. I've worked in universities and schools with 1000s of users accessing the server. I've used disks and SSDs. But I bought decent stuff that was nowhere near its maximum capabilities in normal use, so when disks failed, it rebuilt them easily. You did get a disk controller that handles that, right? You didn't have the CPU doing all the work?


What do you think ZFS is? For some context, ZFS works best when a direct ZFS-to-disk path is present, so that ZFS can determine disk health, detect corruption, handle the disks, etc., and hardware RAID cards present a barrier to this direct access. So you are correct in that we are letting the CPU handle all of that work, and it works quite well. While 2 weeks is a long time to rebuild 7PB, we weren't in any danger of losing data, and the other reason was so we could use it as a practical demonstration for current ICT students (we still need to teach, and what better way to do that than a real-world example using real data).
And we are going to use ZFS to build out our new data centre for SKA data; I believe the build is 500PB of raw HDD storage and something like 440PB of usable space.
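Those two capacity figures line up with the vdev layout described earlier in the thread. A quick check, assuming 25-drive raidz3 groups (22 data plus 3 parity drives each) and ignoring ZFS metadata and slop overhead:

```python
def raidz_usable(raw: float, drives_per_vdev: int, parity_drives: int = 3) -> float:
    """Usable space of a raidz pool: each vdev gives up `parity_drives`
    drives' worth of capacity; filesystem overhead is ignored."""
    return raw * (drives_per_vdev - parity_drives) / drives_per_vdev

print(raidz_usable(500, 25))  # 440.0 PB usable out of 500 PB raw
print(raidz_usable(7, 25))    # ~6.16 PB usable for the 7 PB array mentioned above
```

440/500 is exactly 22/25, i.e. the raw capacity minus three parity drives' worth per 25-drive group.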

*hint hint* ZFS previously stood for zettabyte file system

I'll have you know that Ultra320 is a 2003 standard.
Oooh 2003. Hint: it's now 2022. Ultra 320 is older than my Renault which is worth £150.


Sealed lead acid (SLA) batteries are a 1930s-era technology, yet they are still being manufactured for cars, etc., and especially as UPS batteries. Since I work part time as a freight handler, I get to pick up said devices (that being UPSes and their batteries). And APC, Eaton, plus whoever else makes UPSes still sell them brand new with SLA batteries!
ID: 72728