This message made me think... why couldn't it be possible (were the powers at the helm willing to indulge such a mess) to allow more flexible configurations of mixed hardware types? Why can't differing drive capacities work, or RAIDing over different physical bus types (a PCI drive in RAID with an iPod shuffle for a really extreme example, or a SATA internal with a FireWire external for a more common, practical one)? The truth is, no matter how slow the slowest drive is, if done correctly the way I envision it, adding a slower drive to absorb a disproportionately smaller share of the work would always reduce the time to read or write compared to the faster drive handling the entire operation on its own, since the faster drive can't go any faster than it already does. It is possible to mismatch drives across different buses or different capacities, all other factors being equivalent, with my idea, although you would always be struggling with lost capacity on drives behind relatively slower bus connections. That's the trade-off for keeping writes synchronously simple rather than inventing some inordinately complex asynchronous RAID scheme in which different drives write for longer periods of time to do their bit-equivalent share. That asynchronous same-write-length idea would be lame because it would penalize overall per-file transaction times compared to synchronous operation, although it would preserve the option of redundant bit-parity checking and drive recovery were all drives the same size.
If you change the rules and implement a new kind of software RAID category (in theory) which does not work on an "each drive gets one bit per clock" kind of equal-distribution mentality, but rather "virtualizes" the drives somewhat by not requiring equal numbers of bits to be written to all drives, yet still in a synchronized fashion, you could use a fixed performance- or capacity-based ratio to distribute bits (or groups of bits) disproportionately between drives (disproportionate only in terms of how many bits each drive gets to hold). While you would lose the ability to restore a lost volume automatically the way some RAID levels do with parity-bit checks (parity checking as currently implemented would no longer be possible in this scenario), you would gain a RAID that's more flexible about imperfectly matched hardware. Creating and managing a mismatched RAID could theoretically be done for either unequal bus throughput speeds or unequal drive capacities by means of fixed ratios (but not both kinds of differences at once using my strategy; the ratios would interplay and it wouldn't work). Using a sort of qualitative analysis at the point of establishing the RAID, the controlling software would analyze either the ratio of each drive's capacity to the largest drive deployed, or the ratio of each connection's average actual throughput to the fastest drive found, and thus create a new, simple method of structuring a RAID: not equal division of bits, but a simple internal reference ratio recording each drive's relative capacity or connection-throughput differential. Under this system, files are stored in small packets per time frame, where the packet size is determined by how long the slowest drive connection takes to perform a one-byte write operation.
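The ratio-establishment step could be sketched roughly like this. Everything here is a hypothetical illustration: the drive names and throughput figures are invented, and real software would measure actual transfer rates rather than use fixed numbers.

```python
# Minimal sketch of establishing the fixed ratios at RAID-creation time.
# Drive names and throughput numbers (MB/s) are invented for illustration.

drives = {
    "sata_internal": 150.0,
    "firewire_ext":   40.0,
    "usb_stick":      15.0,
}

fastest = max(drives.values())
slowest = min(drives.values())

# Each drive's ratio: its measured throughput relative to the fastest drive.
ratios = {name: speed / fastest for name, speed in drives.items()}

# The slowest connection defines the packet time frame: while it writes
# one byte, every other drive writes a number of bytes proportional to
# its speed advantage over that slowest connection.
bytes_per_packet = {name: round(speed / slowest) for name, speed in drives.items()}
```

With these numbers, the USB stick gets 1 byte per packet, the FireWire drive 3, and the SATA drive 10, so all three finish their share of each packet in roughly the same wall-clock time frame.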
All other drives will use the ratio scheme to determine the relative number of bits they can write per packet within that same time frame. Faster drives are thus kept happy: per slowest-bus clock cycle, each writes as much information as it was rated capable of when the ratios were established. Read ratios will, for practical reasons, have to be assumed similar to write ratios. If not, an asynchronous option could be deployed for reading only, allowing connections whose read and write times are disproportionate to those of other connections to catch up. This somewhat simplistic approach would not really create much extra overhead.
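Splitting one packet across the array under this scheme might look like the following sketch. The function name, drive names, and the 10/3/1 share table are all assumptions made up for the example; the point is just that each drive's slice size is fixed by its ratio, not equal division.

```python
# Hypothetical per-packet stripe splitter for the fixed-ratio scheme:
# each drive receives a slice of the packet sized by its byte share.

def split_packet(data: bytes, bytes_per_packet: dict) -> dict:
    """Carve one packet's worth of data into per-drive slices,
    walking the buffer in drive order and handing each drive
    its fixed number of bytes."""
    slices = {}
    offset = 0
    for drive, share in bytes_per_packet.items():
        slices[drive] = data[offset:offset + share]
        offset += share
    return slices

# Example: a 14-byte packet split 10/3/1 across three mismatched drives.
packet = bytes(range(14))
out = split_packet(packet, {"fast": 10, "medium": 3, "slow": 1})
```

Reassembly on read is the reverse walk: concatenate each drive's slice in the same fixed order, which is why no per-bit indexing service is needed.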
This, however, leads to a new problem once different speeds are allowed. Using only a simplistic fixed-ratio scheme, and not some kind of drastic indexing service to track bits placed at variable ratios, leads to a situation where space is wasted on drives of equivalent capacity but slower speed ratios, since faster drives will fill to capacity (if capacities are equal) at an uneven rate. For connections with large speed differences, the capacity loss would be equivalent, in percentage terms, to the percentage difference in speed: a drive that runs at most 10% as fast only ever gets to use its first 10%, completely wasting the remaining 90%. That's probably why nobody has attempted such a thing. On the other hand, if the slowest drive was only about 10% of the capacity anyway, then this wouldn't be a loss at all, and you just took 10% of the workload off your largest drive for free.
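The wasted-capacity arithmetic works out like this, as a quick sketch. It assumes equal-capacity drives with speeds normalized so the fastest is 1.0 (the case the paragraph describes); the capacities and speed figures are invented for illustration.

```python
# Sketch of the capacity-loss arithmetic under the fixed-ratio scheme,
# assuming equal-capacity drives and speeds normalized to the fastest.
# Because every drive fills in fixed proportion to its speed, the array
# is "full" once the fastest drive is full; a slower drive has only
# absorbed its speed fraction of its own capacity by then.

def usable_capacity(drives):
    """drives: list of (capacity_gb, relative_speed), fastest speed = 1.0.
    Returns (total usable GB, total wasted GB)."""
    total = 0.0
    wasted = 0.0
    for capacity, speed in drives:
        usable = capacity * speed  # speed fraction == usable fraction
        total += usable
        wasted += capacity - usable
    return total, wasted

# Two equal 1000 GB drives, the second running at 10% of the first's speed:
total, wasted = usable_capacity([(1000, 1.0), (1000, 0.1)])
# total -> 1100.0 usable GB, wasted -> 900.0 GB stranded on the slow drive
```

The same function also shows the happy case from the paragraph: a slow drive whose capacity is only about 10% of the array's contributes nearly all of it, since its small size never outruns its speed share.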