# which nas to get - pytivo, streambaby, etc



## johnh123 (Dec 7, 2000)

So I'm going to be getting a new NAS. Consumer level; I only need two drives for mirroring. Ideally I want to be able to stream to a TiVo Premiere as well, or at least use pyTivo, and I don't want to have to keep a computer running all the time for that. Which NAS can I easily set up pyTivo on, and are there any consumer-level NAS units that can run Streambaby?


----------



## jcthorne (Jan 28, 2002)

There are several that could fit your requirements, but the lower-end Synology NAS units are well supported both here and at the Synology forums and will do what you want. That is, other than transcoding video on the fly; the low-end units just don't have the processing power for it. But pyTivo, vidmgr, and jukebox all run well on the Linux-based Synology units, and they are well-regarded NAS units as well.

I will say that you should choose wisely when sizing your NAS. Once all this works well for you, you may very well find your storage needs expanding. It's far cheaper to buy a NAS that can be expanded as your needs grow than to buy one that cannot and have to do it again. I've been very happy with my 1511, which I started with three 1TB drives and is now at seven 3TB drives, each upgrade and addition handled by the NAS smoothly and without downtime. The dynamic hybrid RAID functionality is very useful.


----------



## Iluvatar (Jul 22, 2006)

I have a Synology DS411slim with 4x2TB drives and a couple of external USB/eSATA drives, running pyTivo along with some helpful utilities like SickBeard and Transmission. Works just great. FFmpeg is really slow on these devices. While recompiling a custom FFmpeg build does help slightly, it is usually best to make sure whatever you put on the NAS to feed pyTivo is either TiVo-compatible or remuxable by pyTivo for quick transfers.

I don't use streambaby but know that it requires Java. I am uncertain how Java performs or is installed onto the NAS devices so I would look into that if streambaby is a requirement.


----------



## jcthorne (Jan 28, 2002)

Java on the NAS would be a problem. I would suggest using vidmgr and pyTivo to effectively replace the function of Streambaby from the NAS.


----------



## wmcbrine (Aug 2, 2003)

You could also try my HME/VLC as a partial replacement for Streambaby. Despite the name, it can run without VLC. It will work in the same environment as the other programs mentioned above (i.e. just add Python). It's only a partial replacement because it lacks the ability to rebuffer files over 1.1 GB that Streambaby has, among other things. On the other hand, it handles RSS feeds directly (and live streams, but you probably don't want to try those without VLC).


----------



## johnh123 (Dec 7, 2000)

I'm looking at the 212+. Drives are hot-swappable, so it seems I could start with 2x1TB and expand over time up to 2x4TB. By the time I need more than that, it will be time for a new NAS.

If I have, say, an MKV file (h.264, AAC 2.0), would it take a lot of time for pyTivo to transcode it, or could you begin viewing within a reasonable amount of time?


----------



## wmcbrine (Aug 2, 2003)

If you use the "push" system, recent versions of pyTivo will remux that to an MP4 in practically no time and send it to the TiVo without having to transcode it. However, you then add an unknown delay due to the nature of push (it depends on TiVo.com's servers).
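As a rough sketch of what that remux step amounts to (a hypothetical command builder; the flags are standard ffmpeg stream-copy usage, but pyTivo's actual invocation may differ):

```python
# Hypothetical sketch: remuxing changes only the container, copying the
# h.264 video and AAC audio streams into an MP4 without re-encoding.
def remux_args(src, dst):
    """Build an ffmpeg command line that copies streams into a new container."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "copy",  # keep the video stream as-is
        "-c:a", "copy",  # keep the audio stream as-is
        dst,
    ]

print(remux_args("movie.mkv", "movie.mp4"))
```

Because no re-encoding happens, this runs at disk speed even on a low-powered NAS CPU.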


----------



## jcthorne (Jan 28, 2002)

My experience has been that h.264 video compatibility DEPENDS on its encoding levels. The video rendering chip in the TiVo is somewhat limited. For 720p, level 4.1 with 5 reference frames is usually reliable and plays without difficulty. For 1080p24 and 1080i30, use level 4.1 with NO MORE than 4 reference frames, or the TiVo has a high likelihood of not working well. H.264 level 5 is not compatible.
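Those limits could be sketched as a simple check (thresholds taken from this post, not from any official TiVo spec; the level is written as an integer, 41 for 4.1):

```python
# Thresholds from the post above, not an official spec.
def tivo_compatible(level, ref_frames, height):
    """Rough guess at whether an h.264 stream will play cleanly on a Premiere."""
    if level > 41:            # level 5 (anything past 4.1) is out
        return False
    if height >= 1080:        # 1080p24 / 1080i30: no more than 4 ref frames
        return ref_frames <= 4
    return ref_frames <= 5    # 720p: 5 ref frames is usually reliable

print(tivo_compatible(41, 5, 720))   # True
print(tivo_compatible(41, 5, 1080))  # False
```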

In order to keep the WAF as high as possible, I prefer to prep all video on the server into TiVo-compatible files before pyTivo ever sees them and avoid hiccups at play time. Many files will remux and play fine on the fly, as pyTivo is designed to do; I prefer to eliminate the 10% or more that do not.

All of my video is stored as MP4 files with AC3 audio, level 4.1 with 4 reference frames. These files also play as-is on my WDTV and EVO 3D smartphone.

TiVo plays AC3 5.1 or AAC 2.0. It cannot handle AAC 5.1 or DTS, although pyTivo can recode the audio pretty quickly.


----------



## Iluvatar (Jul 22, 2006)

jcthorne said:


> My experience has been that h.264 video compatibility DEPENDS on its encoding levels. The video rendering chip in the TiVo is somewhat limited. For 720p, level 4.1 with 5 reference frames is usually reliable and plays without difficulty. For 1080p24 and 1080i30, use level 4.1 with NO MORE than 4 reference frames, or the TiVo has a high likelihood of not working well. H.264 level 5 is not compatible.


I agree based on my experience. I do wish FFmpeg reported in greater detail so the info could be parsed by pyTivo and checked against.


----------



## wmcbrine (Aug 2, 2003)

Recent FFmpegs do provide more detail, if we can parse it --



> Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'oncgaga.mp4':
> Metadata:
> major_brand : isom
> minor_version : 512
> ...


I think the highlighted bits may be relevant.
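As a sketch of the kind of parsing being discussed (the sample text is invented to resemble the quoted banner; real FFmpeg output varies by version, which is exactly the problem):

```python
import re

# Invented sample resembling FFmpeg's input banner; real output differs
# between FFmpeg versions and encoders.
sample = """Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'example.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
  Stream #0:0: Video: h264 (High), yuv420p, 1280x720"""

def container_brands(text):
    """Pull the compatible_brands string, if the encoder wrote one."""
    m = re.search(r"compatible_brands:\s*(\S+)", text)
    return m.group(1) if m else None

def is_h264(text):
    return re.search(r"Video:\s*h264", text) is not None

print(container_brands(sample))  # isomiso2avc1mp41
print(is_h264(sample))           # True
```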


----------



## jcthorne (Jan 28, 2002)

The mp41 might be the level, but it does not appear to show the number of reference frames used in the stream. Not sure where else pyTivo could get this info. I use a utility, *mediainfo*, but it's Windows-based.


----------



## Iluvatar (Jul 22, 2006)

jcthorne said:


> The mp41 might be the level, but it does not appear to show the number of reference frames used in the stream. Not sure where else pyTivo could get this info. I use a utility, *mediainfo*, but it's Windows-based.


mediainfo works on Linux and OS X as well. pyTivo could definitely parse the output from it, but it would be yet another binary dependency that would require the user to download and provide to pyTivo.

It looks like the FFmpeg output could at least be used to determine whether the video is >L4.1. However, it seems the information is not reliably provided for every h.264 video and relies on the original encoder to supply the metadata.


----------



## johnh123 (Dec 7, 2000)

Now I see the Netgear ReadyNAS Ultra 2 Plus can be had for about the same as the Synology 212+. The Netgear has a dual-core Atom processor and 1GB of RAM, much better specs than the Synology. Anyone have any experience with pyTivo on the Netgear box?


----------



## lrhorer (Aug 31, 2003)

johnh123 said:


> I'm looking at the 212+.  Drives are hot swappable so it seems I could start with 2x1 tb and expand over time up to 2x4 tb.


Oh, hey! Sure enough, Hitachi has some 4T drives available. When did those get released? I missed that one.

Anyway, to your suggestion, I would avoid drive expansions, if I were you. They certainly can be done, and I have done a few myself, but I would recommend spindle expansions, rather than drive expansions. Of course that also means abandoning your plan to deploy RAID1. I would recommend going with RAID5 or RAID6 and increasing the number of spindles for growth purposes, rather than swapping to larger drive sizes. When it does come time to increase the drive size, I would recommend building a whole new array with fewer spindles and copying the data over to the new array. Note a second array does not necessarily require a new chassis, as long as the chassis has room for the additional spindles.



johnh123 said:


> By the time I need more than that it will be time for a new nas.


I wouldn't really bet on it. It's amazing how fast data can expand. Of course your needs may well be different, but in the beginning my arrays expanded a lot faster than 2T per year. In your case, that would mean you could be looking at your first expansion in less than six months, and a new NAS in less than 2 years. It's up to you, but I would plan for more aggressive expansion, especially at first.



johnh123 said:


> If I have say an mkv file, h264, aac 2.0, would it take a lot of time for pytivo to transcode that, or could you begin viewing within a reasonable amount of time?


Well, in pull mode, one can always start viewing immediately. What happens is one soon encounters a pause if the server cannot keep up with the program bit rate. In push mode, the TiVo will enforce enough buffering that one should not encounter any pauses, but this does mean one may not be able to start viewing immediately. Note that with S3-class TiVos, recoding the video to h.264 in an .mp4 container beforehand will allow the data to transfer via push much, much faster without transcoding on the fly.


----------



## johnh123 (Dec 7, 2000)

OK, looks like I'm going with the DS411+. Room for growth, and if I can run SABnzbd, pyTivo, and vidmgr I think I should be set for some time. I think I will start with 2x1.5T RAID 1; then when I add another 1.5T I will have 3T of RAID 5, and when I add another I will have 4.5T of RAID 5, if I understand it correctly. I'm not really one who keeps movies after I watch them, so this is primarily for photos and home movies.


----------



## lrhorer (Aug 31, 2003)

If it were me, I would avoid an online RAID migration. I would start with RAID5 or RAID6 with a missing member. You can do 2 x 1.5T RAID5/6, which with 2 spindles will give you the same amount of storage as RAID1.


----------



## johnh123 (Dec 7, 2000)

Will Synology let me set up a RAID 5/6 with only two drives? What about SHR: recommended or not?


----------



## jcthorne (Jan 28, 2002)

The Synology system will let you do better than that. Set up the 2 drives as a Synology Hybrid RAID to start. This will allow you to add drives and increase the size of the drives as you go without the need for a full rebuild; it does the rebuild in the background over a couple of hours or days. The size of the Hybrid RAID volume will always be the total size of all drives, less the size of the largest drive, always with single redundancy. It supports dual redundancy as well, but on a 4-bay machine that's kind of pointless. It's a very stable and well-done system.
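That capacity rule is easy to sketch (drive sizes in TB; this ignores filesystem overhead and base-10 vs. base-2 differences):

```python
def shr_capacity(drives_tb):
    """Usable space of a single-redundancy SHR volume: total minus the largest drive."""
    return sum(drives_tb) - max(drives_tb)

print(shr_capacity([1.5, 1.5]))       # two 1.5T drives: 1.5T, same as a mirror
print(shr_capacity([3, 3, 2, 2, 2]))  # mixed sizes: 9T usable
```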


----------



## lrhorer (Aug 31, 2003)

jcthorne said:


> The Synology system will let you do better than that. Setup the 2 drives as a Synology Hybrid RAID to start. This will allow you to add drives and increase the size of the drives as you go without the need for a full rebuild.


As a veteran of more RAID reshapes than I care to recall, I can say without reservation that just because a system supports (or claims to support) OLRM and various non-standard geometries is not a good reason for the user to take advantage of those features. Even under the best circumstances, a RAID reshape is not a trivial thing. Typically, every single byte of data must be read and re-written to new locations. The more complex the re-map, the more fragile it is.

If anyone cares to doubt me, I suggest they subscribe to the Linux-RAID mailing list for a few weeks.



jcthorne said:


> It does the rebuild in the background over a couple of hours/days.


That's not the issue. Virtually all RAID reconfiguration, and certainly any OLRM is accomplished in the background. The amount of time an OLRM will take depends on how large the RAID members are and how fast they can be read and written. A typical array with 1T members will take at a minimum about a day to a day and a half to re-shape. The entire time, the array is in jeopardy, and anything from a drive failure on down can potentially trash the entire array. With 3T members, expect it to take closer to a week. The odds of one out of four drives failing during any given week are not reassuringly low.



jcthorne said:


> The size of the Hybrid raid volume will always be total size of all drives, less the size of the largest drive. Always with single redundancy. It supports dual redundancy as well but on a 4 bay machine kind of pointless.


That depends on his needs. Some people demand triple or even quadruple redundancy. RAID1 (and RAID10) certainly supports multiple mirrors. RAID5 allows a good compromise for moderate-sized arrays. RAID6 allows one to achieve something closer to the redundancy of RAID1 with a level of economy more like RAID5.


----------



## lrhorer (Aug 31, 2003)

johnh123 said:


> Will synology let me set up a raid 5/6 with only two drives? 'What about shr- recommended or not?


I can't speak specifically to the Synology, but a RAID5 array can certainly in general be built from any number of spindles, including just one (with one missing). Similarly, a RAID6 array with one member missing can be built from 2 drives. Adding the 3rd member should cause the array to automatically resync.


----------



## jcthorne (Jan 28, 2002)

lrhorer said:


> That's not the issue. Virtually all RAID reconfiguration, and certainly any OLRM is accomplished in the background. The amount of time an OLRM will take depends on how large the RAID members are and how fast they can be read and written. A typical array with 1T members will take at a minimum about a day to a day and a half to re-shape. The entire time, the array is in jeopardy, and anything from a drive failure on down can potentially trash the entire array. With 3T members, expect it to take closer to a week. The odds of one out of four drives failing during any given week are not reassuringly low.


Using Synology Hybrid RAID, this has not been the case. During one of my expansions, from 6 3T drives to 7, about 50% of the way through, one of the drives failed. I did not lose the array. I replaced the drive (drive 2 in this case) and the rebuild started; then the expansion restarted. The redundancy is expanded to the new drive before the array is expanded.

Also, expansion from 5 to 6 took approximately 36 hours, not a week. Total time for 6 to 7 was 3 days, but that was because of the failure.

I am not saying it's perfect, nor that there is no increased risk while the array is expanded, just that for most incidents it's not a total loss. I do, and highly recommend, having a full backup in place prior to any array reconfig.

Also, I really saw no difference between an expansion by increasing a drive size vs. adding a drive; either operation took about the same time to complete.


----------



## lrhorer (Aug 31, 2003)

jcthorne said:


> Using Synology Hybrid RAID, this has not been the case. During one of my expansions, from 6 3T drives to 7, about 50% of the way through, one of the drives failed. I did not lose the array.


No RAID reshape is ever supposed to lose data, but that does not mean it doesn't happen. Take it from someone who has suffered through well over a dozen array failures. Note an expansion from N to N + 1 drives all of uniform size does not have as many critical phases, compared with some OLRM operations.



jcthorne said:


> I replaced the drive (drive 2 in this case) and the rebuild started. Then the expansion restarted.


No, it resumed. It is not possible to miraculously instantaneously revert to the old structure. While the rebuild was occurring, part of your array had N members, and part had N + 1 members. Had the OS lost track of exactly which part had N members and which part had N + 1 members, your array would have been hosed. Fortunately in your case the failed operation was not the one which kept track of how far the expansion had progressed.

The most likely cause of such a failure would be a power failure or a drive controller failure, possibly accompanied by or caused by a drive failure.



jcthorne said:


> The redundancy is expnaded to the new drive before the array is expanded.


That depends on the type of re-shape and exactly when it fails. Take for example an expansion from 4 data drives to 5 data drives on any RAID level higher than 1. Every bit of data on the drives must be re-organized and re-written. Ignoring parity for the moment, prior to the re-shape, the data is divided up into 4 sets of moderately sized chunks. Member 1 contains chunks 1, 5, 9, 13, 17, etc. Member 2 contains chunks 2, 6, 10, 14, 18, and so forth. After the re-shape is complete, the data is divided up into 5 sets, not 4. As the re-shape progresses, chunk 5 is moved from member 1 to member 5. Chunk 6 is moved from member 2 to member 1, and so on. Going from 4 drives to 5, less than 17% of the information originally on any member winds up back on that same member (allowing for parity). The superblock contains the information that tells the OS driver what the organization is, but during the re-shape, some fraction of the data in the array is no longer organized that way, so the OS has to keep track of what portion of the array does not match the superblock, which is re-written either at the beginning or the end of the migration. Obviously, this information is stored somewhere and updated every time the blocks are moved around on the drive members. If that information is corrupted (perhaps by a failed write to a drive in the array), then POOF! goes the entire array, or at least some portion of it.

Keep in mind as well that for RAID levels greater than 1, parity must also be re-calculated and written. In the case of RAID6, at least twice the amount of parity data is calculated and written compared to RAID5. A 5-member RAID5 array is divided into 4 sets of data information and 1 of parity. When the array is expanded to 6 members, there are now 5 sets of data completely different in organization to the original 4, plus a parity that is completely different in every aspect from the original parity, being a checksum of chunks 1 - 5, 6 - 10, 11 - 15, etc., rather than of 1 - 4, 5 - 9, 10 - 14, etc.



jcthorne said:


> Also expansion from 5 to 6 took approx 36 hrs. Not a week.


That depends on a number of factors. The time it takes to re-sync the array is directly proportional to the member size and inversely proportional to the write speed of the drives being written. An array built of small, fast drives will re-sync much, much faster than one built of larger, slower drives. It also depends on how much of the data is being re-located. In addition, all the most popular RAID management software limits the amount of bandwidth allocated to the re-shape so that users do not complain about slow file access. Many admins will go in after hours and increase the resource limits in order to speed up the array re-shape and then back the bandwidth down again during working hours so they don't impact the users. Of course, in the case of an array in someone's home, doling out dribs and drabs of data for things like a TiVo, one may choose to simply let the re-shape rip. In any case, however, a consumer-class drive can probably manage a continuous write of about 30 MB/sec or so. On a 1T drive, that works out to no less than 9.3 hours at maximum speed. A 3T drive stretches that to 27.8 hours. If the drive is being read as well as written, it can easily double that to more than 55 hours, and that is best case.
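The arithmetic behind those estimates is just member size over sustained write speed. A quick sketch (30 MB/sec is the assumed consumer-drive figure from the post, not a measurement):

```python
def resync_hours(member_bytes, write_rate=30e6):
    """Best-case reshape time: every byte of a member written once at write_rate B/s."""
    return member_bytes / write_rate / 3600

print(round(resync_hours(1e12), 1))  # ~9.3 hours for a 1T member
print(round(resync_hours(3e12), 1))  # ~27.8 hours for a 3T member
```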

The brand or type of array is not terribly relevant, since generally speaking it is the drive that limits the re-sync performance, not usually the array.



jcthorne said:


> I am not saying its perfect, nor that there is not increased risk while the array is expanded, just that for most incidents, its not a total loss.


That "most incidents" are not fatal is poor consolation to anyone whose array is toast. The wise person will eliminate the most dangerous paths as much as possible by not taking them in the first place, especially not if the path would be taken merely for convenience's sake.



jcthorne said:


> Also I really saw no difference between an expansion by increasing a drive size vs adding a drive, either operation took about the same time to complete.


Adding a drive most of whose space does not get consumed by the array takes precisely the same amount of time as adding a drive of an appropriate size. Syncing a 10T array built of five 2T drives will take at least twice the time that syncing a 10T array built of ten 1T drives will, for the same sync operation. Again, the principal limit is the size of the members divided by the speed of the members.


----------



## MichaelK (Jan 10, 2002)

Back to the original question: I've always been impressed by the HP N40L. You basically get a barebones NAS box without an OS. You can put FreeNAS on it (for free), or Windows Home Server (for around 50 bucks), or Win7, or pretty much any Linux you want. Just lots of options. If you watch, there are frequently deals for the box, or the box with extra memory or with a second drive.

Since it takes "normal" OSes, any app you want can run on it.

The RAID choices are limited off the shelf, but I think FreeNAS has some sort of software RAID. WHS has pay add-ins that give lots of RAID-like options similar to the old WHS Drive Extender feature.

If you prefer real hardware RAID, the box takes PCI Express cards, so you can add it easily.

All that in a nice small package.

Just tossing it out there.


----------



## hefe (Dec 5, 2000)

Just a data point...my NAS is unRaid in an HP microserver case. The PC that I installed pyTivo on yesterday has the shares mapped as drives. I put in the Movies share in the pyTivo configuration, and it's working just fine as far as I can tell.


----------



## lrhorer (Aug 31, 2003)

hefe said:


> Just a data point...my NAS is unRaid in an HP microserver case. The PC that I installed pyTivo on yesterday has the shares mapped as drives. I put in the Movies share in the pyTivo configuration, and it's working just fine as far as I can tell.


There's nothing particularly wrong with such a deployment, but I prefer to load all the servers on the same machine that hosts the array. It certainly will work, however, to load the servers on an external machine and host the files from a machine dedicated to nothing more than providing the storage.


----------



## hefe (Dec 5, 2000)

It may be a next step to figure out how to load servers on the unRaid system, which is Linux based...but one step at a time...still learning how all this stuff plays together!


----------



## jcthorne (Jan 28, 2002)

lrhorer said:


> No RAID reshape is ever supposed to lose data, but that does not mean it doesn't happen. Take it from someone who has suffered through well over a dozen array failures. Note an expansion from N to N + 1 drives all of uniform size does not have as many critical phases, compared with some OLRM operations.
> 
> Adding a drive most of whose space does not get consumed by the array takes precisely the same amount of time as adding a drive of an appropriate size. Syncing a 10T array built of five 2T drives will take at least twice the time that syncing a 10T array built of ten 1T drives will, for the same sync operation. Again, the principal limit is the size of the members divided by the speed of the members.


lrhorer, you know more about RAID array internal architecture than I ever want to know. I was just relating what I have observed as a user of the Synology Hybrid RAID system on my NAS. As for the risk, I look at it a bit differently. Since I will not venture into an array reshape without a complete backup, my risk is one of loss of time, not loss of data. If the reshape fails, I have to do a long and tedious restore. If it succeeds, I save time and have an up-to-date backup. In this case it's ALL about convenience; I do not have other users to worry about, nor is the loss of the system for a few days going to be catastrophic. The convenience of expanding the array as my storage needs grow is very useful to me. There is risk, but in my case the risk of loss of data is pretty small.

On the expansion times: I know, at least on my 1511, all 7 drives are working at the same time in parallel during an expansion. It also dynamically allocates resources to maintain file access speeds vs. background tasks. If I don't use the files, the process goes much faster, i.e. it makes much more progress during the overnight hours without me having to do anything to reallocate resources.

On the redundancy: when a drive is added to an existing N+1 array, the first task is that the array becomes N+2 (dual redundancy). It then expands the array block by block, recalculating the parity such that as it goes, each becomes an N+1 block again. At no time are there any blocks that are N+0. Yes, the allocation tables could get corrupted, but those are N+2 redundant (3 identical tables) to the very end, and then N+1 when the operation is complete. This way, if one does end up corrupted during the expansion, the NAS knows what the right answer is. Simple 2oo3 voting.
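The two-out-of-three vote described here is straightforward to sketch (a generic illustration of the principle; how Synology actually stores and votes on its metadata tables isn't documented in this thread):

```python
def vote_2oo3(a, b, c):
    """Return the value at least two of the three copies agree on, else None."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    return None

print(vote_2oo3("table", "table", "corrupt"))  # table
print(vote_2oo3("x", "y", "z"))                # None
```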

The SHR allows expansion by enlarging a drive already in the array. I.e., if there are two 3T drives and three 2T drives in the array with a capacity of 9T and N+1 redundancy, I can replace one of the 2T drives with a 3T and end up with a 10T N+1 array.

These automated array management and expansion capabilities are the main reason I ended up going with a Synology NAS vs. many of the others. I actually considered a Drobo box for a week or so but ended up dropping it due to its lackluster performance and closed environment.


----------



## lrhorer (Aug 31, 2003)

jcthorne said:


> lrhorer, you know more about RAID array internal architecture than I ever want to know. I was just relating what I have observed as a user of the Synology Hybrid RAID system on my NAS. As for the risk, I look at it a bit differently. Since I will not venture into an array reshape without a complete backup, my risk is one of loss of time, not loss of data. If the reshape fails, I have to do a long and tedious restore. If it succeeds, I save time and have an up-to-date backup. In this case it's ALL about convenience; I do not have other users to worry about, nor is the loss of the system for a few days going to be catastrophic. The convenience of expanding the array as my storage needs grow is very useful to me. There is risk, but in my case the risk of loss of data is pretty small.


Your point is taken. Indeed, there is a difference between the loss of the data and the loss of an array. Sometimes the array or a part of it is recoverable without requiring a restore from backup.

The salient point here is the OP needs to take this into account one way or the other. The worst mistake made by noobs (and it is a very common one) is to think of a RAID array as fault-proof. It is fault-tolerant, not fault-proof. Having a RAID array (even a multiple spindle RAID1 array) does not eliminate the need for a good, comprehensive backup strategy.



jcthorne said:


> On the expansion times: I know, at least on my 1511, all 7 drives are working at the same time in parallel during an expansion.


This should normally always be the case, but the limit is still how fast the data can be written to the slowest member. Writing 1T of data to a 1T hard drive takes a very specific amount of time.



jcthorne said:


> It also dynamically allocates resources to maintain file access speeds vs. background tasks.


Typically a minimum and maximum bandwidth is specified for the sync operation. Lowering the maximum will make sure more resources are available for real-time access. Raising the minimum will force the array to spend more resources on the re-sync operation. Anything between the two parameters is up for grabs.



jcthorne said:


> If I don't use the files, the process goes much faster.


Well, somewhat faster, depending on the level of file access. What really can add up is the seek times. Even for the re-sync itself, the drive heads are having to swing back and forth across the platters to first read several sectors from one part of the drive and then write a similar amount of information to a different part of the drive. Add in a random seek to serve up data from the array every few ms, and it can really bog down the re-sync process.



jcthorne said:


> On the redundancy: when a drive is added to an existing N+1 array.


Sorry, by "N + 1", I did not mean an array with N data members plus 1 parity. I meant an array with N data members migrating to an array with N + 1 data members.



jcthorne said:


> The first task is that the array becomes N+2 (dual redundancy).


That is more or less an expansion to RAID6 from RAID5. It doesn't increase the space on the array. More importantly, none of the blocks would be in the correct order. Expanding a RAID5 array from 5 members to 6 requires reading chunks 1 - 5 (reading member #1 twice for chunks 1 and 5), calculating parity, writing it to member #6, and then writing chunk #5 to member #5, overwriting the parity formerly stored there. Next, chunks 6 - 10 are read (this time reading member #2 twice for chunks 6 and 10), parity is calculated and written to member #5, and then chunk 6 is written to member 1, chunk 7 is written to member #2, ... and chunk 10 is written to member #6. Next, chunks 11 - 15 are read, parity is written to member #4, and the data is written back to the members starting with #3. If the re-sync routine is written properly, it is true that it is easy to maintain parity so there is always at least one copy of it, but that doesn't prevent the OS from losing the information that tells it how many of the chunks have been converted from 5 members to 6.

RAID3 and RAID4 assign an actual drive for parity, but RAID5 and RAID6 employ distributed parity, which means on a 6 member RAID5 array, 20% of the parity is written to each drive.
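Ignoring the rotated parity, the reason nearly every chunk moves during a reshape can be sketched in a few lines: the member holding a chunk is just the chunk's index modulo the stripe width, and the width changes. (A simplified model, not how any particular RAID driver lays out data.)

```python
def member_of(chunk, width):
    """0-based member index holding a 0-based data chunk; parity ignored."""
    return chunk % width

# Count how many of the first 1000 chunks land on a different member
# when the stripe width grows from 5 data members to 6.
moved = sum(1 for c in range(1000) if member_of(c, 5) != member_of(c, 6))
print(moved)
```

In this simplified model only a small fraction of chunks stay put; everything else has to be read and re-written somewhere new, which is why the OS must track exactly how far the conversion has progressed.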



jcthorne said:


> It then expands the array block by block, recalculating the parity such that as it goes, each becomes an N+1 block again.


The parity has to be re-calculated for every block as it changes from N chunks to N + 1 chunks.

Suppose we have the following set of data, and that each chunk is only 8 bits. When we start out, the blocks look like this:

1 10000000
2 11000000
3 11100000
4 11110000
P 10101111
-----------
5 00000001
6 00000011
7 00000111
P 11110101
8 00001111
-----------
9 00011111
A 00111111 ...

When we expand those blocks onto six drives, they now look like this:

1 10000000
2 11000000
3 11100000
4 11110000
5 00000001
P 10101110
-----------
6 00000011
7 00000111
8 00001111
9 00011111
P 11010100
A 00111111
-----------
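For what it's worth, the P rows in the example above follow a complemented-XOR (odd parity) convention; production RAID5 stores the plain XOR, but the example is internally consistent. A quick check of the first stripe in each layout:

```python
def parity(chunks):
    """Complemented XOR of 8-bit chunks, matching the example's convention."""
    acc = 0
    for c in chunks:
        acc ^= c
    return ~acc & 0xFF

# First 5-drive stripe (chunks 1-4), then first 6-drive stripe (chunks 1-5):
print(format(parity([0b10000000, 0b11000000, 0b11100000, 0b11110000]), "08b"))
print(format(parity([0b10000000, 0b11000000, 0b11100000,
                     0b11110000, 0b00000001]), "08b"))
```

Note how adding chunk 5 to the stripe changes the parity byte, which is why every parity block must be recalculated and rewritten during the expansion.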



jcthorne said:


> At no time are there any blocks that are N+0.


Yeah, that's easy enough. All that is required is that the information be written to empty sectors, and then the pointers updated to show the new locations. The big problem occurs if an error occurs while the pointers are being updated.



jcthorne said:


> Yes, the allocation tables could get corrupted, but those are N+2 redundant (3 identical tables) to the very end, and then N+1 when the operation is complete.


RAID doesn't have allocation tables. The superblock defines the extent of the array and its organization. That's why it only takes a moment to create an array, as well as why the size of the array is fundamentally identical to the total size of the members. If the superblock says it is a RAID5 array with 6 members, then the RAID driver assumes member #1 will contain chunks 1, 6, 11, 16, etc. If that organization changes half-way through the array, then a "ghost" superblock that tells the driver it needs to start looking for 6 chunks per block, rather than 5, starting at block #2000 needs to be created and maintained. Of course the superblock is small, and can be easily duplicated multiple times, but what does the driver do if two of the copies of the "ghost" superblock disagree on where the 6-chunk blocks start?



jcthorne said:


> The SHR allows expansion by enlarging a drive already in the array. I.e., if there are two 3T drives and three 2T drives in the array with a capacity of 9T and N+1 redundancy, I can replace one of the 2T drives with a 3T and end up with a 10T N+1 array.


And badly degraded performance, not to mention unbalanced I/O loading. It's a good way to grind away at the 3T spindles. It most certainly can be done, but a better strategy is to buy drives all of one size, that being the lowest cost per GB.

At this point one should note the member size and the spindle size do not have to be the same under any RAID implementation. For example, my main array consisted of fourteen 1T spindles, while the backup consisted of ten 1.5T spindles. When I upgraded my main array, I purchased eight 3T drives (before the flood, thank goodness!) and copied the data over from the fourteen spindle array to a new array built of the 3T spindles. Suddenly, I had a secondary array that was a bit too small, and fourteen spare 1T drives. I took four of the 1T drives and built a pair of 2T RAID0 arrays. I then took those arrays and attached them to the array built of 1.5T spindles. It means 500G of each pair of 1T drives is unused, but it also means any time I like I can replace a pair of 1T drives with a single 1.5T drive, if need be. As an aside, since the RAID0 arrays are striped across two drives, those 2T members are very fast compared with the 1.5T drives.


----------



## johnh123 (Dec 7, 2000)

jcthorne said:


> Java on the NAS would be a problem. Would suggest using vidmgr and pytivo to effectively replace the function of streambaby from the NAS.


I see that a number of packages available for the Atom-based Synology units use Java. If the Synology can handle Serviio, I'd think it could handle streambaby.


----------



## jcthorne (Jan 28, 2002)

lrhorer said:


> Your point is taken. Indeed, there is a difference between the loss of the data and the loss of an array. Sometimes the array or a part of it is recoverable without requiring a restore from backup.
> 
> And badly degraded performance, not to mention unbalanced I/O loading. It's a good way to grind away at the 3T spindles. It most certainly can be done, but a better strategy is to buy drives all of one size, that being the lowest cost per GB.


Thanks for the great explanations. I learn more as this goes on. I have to admit most of what I 'think' I know about RAID comes from advertising and instruction manuals, not an intimate knowledge of the internals.

One item I can relate from the above is with regard to an array across mixed size drives, a combo of 2T and 3T drives in my case. While performance may be degraded internal to the NAS, from an external user standpoint where data rates are limited to a dual gigabit pipe, there was no difference in speed between the mixed array and the array with all 3T spindles. I can consistently move (2) 85+ MBps streams between the NAS and 2 users (each limited to a single Gbit connection). Perhaps it's my network that needs a bit of improvement, not the NAS.

I guess my point was only that I have been pretty impressed with my Synology NAS's ability to 'take care of itself' from a RAID management point of view and allow me to grow the array as my needs increase. I know there is a lot going on under the hood. Thanks again for pulling back the curtain so we can see a bit of it.


----------



## lrhorer (Aug 31, 2003)

jcthorne said:


> One item I can relate from the above is with regard to an array across mixed size drives, a combo of 2T and 3T drives in my case. While performance may be degraded internal to the NAS, from an external user standpoint where data rates are limited to a dual gigabit pipe, there was no difference in speed between the mixed array and the array with all 3T spindles.


That can well be true. Note performance isn't necessarily the only issue, however. The fact the 3T spindles are being accessed 50% more often than the 2T spindles, and the fact the heads on the 3T spindles are on average jumping around a great deal more than those on the 2T spindles, may have repercussions in terms of longevity. From my perspective, though, the big fly in the ointment is the increased complexity of the system at the member level and the consequent increase in frailty of the system. That doesn't mean the system WILL encounter a failure for any given user; it is just more likely. Perhaps more importantly, when a failure does occur, the more complex system generally presents the user with a more difficult recovery process.



jcthorne said:


> I can consistently move (2) 85+ MBps streams between the NAS and 2 users (each limited to a single Gbit connection). Perhaps it's my network that needs a bit of improvement, not the NAS.


'Depends on your needs. If all you are servicing is a handful of TiVos and a few user workstations, then a pair of 1G connections may well be more than enough. Indeed, for most home users, 190 MBps is more than plenty.

Again, I'm not saying the user can't make use of such features, or even that he is foolish for doing so, just that I do not recommend it.


----------



## johnh123 (Dec 7, 2000)

OK, i've got the 1411+ up and running. Installed pytivo following the thread over at the pytivo forums. It does not start automatically. 

If I use putty to go in and enter '/opt/bin/python /usr/local/pyTivo/pyTivo.py > /dev/null 2>&1 & ' 

then it will run. If I exit putty, then it stops. How do I keep it running all the time?


----------



## wmcbrine (Aug 2, 2003)

Move the "2>&1" before the ">/dev/null", I think, but definitely add "</dev/null" (to redirect input as well as output) before the final "&". Although personally, I like to run it under "screen" instead.

As for running it at startup, I'll leave that explanation to someone who does that...


----------



## lrhorer (Aug 31, 2003)

johnh123 said:


> OK, i've got the 1411+ up and running. Installed pytivo following the thread over at the pytivo forums. It does not start automatically.
> 
> If I use putty to go in and enter '/opt/bin/python /usr/local/pyTivo/pyTivo.py > /dev/null 2>&1 & '
> 
> then it will run. If I exit putty, then it stops. How do I keep it running all the time?


Well, the short answer is you need to detach the child process from its parent so the child will remain in memory after the parent terminates.

Typically, this is done via the use of an external utility such as nohup. Nohup will detach the child process and optionally redirect the console output to a log file.

This is useful both from a terminal session and at startup, although startup works a bit differently from system to system. If nohup is not on your NAS, then I suggest you get a copy - either one from the distro used by your NAS or else compile it manually. To see if nohup is on your NAS, type:


```
which nohup
```
It will either return a fully qualified filename (meaning nohup is on the machine), or nothing at all (meaning it isn't).
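
As a quick stand-in demonstration of the detach behavior (using `sleep` in place of the pyTivo command quoted earlier):

```shell
# Demonstrates the detach: nohup plus '&' leaves the child running after
# the launching shell exits. Substitute the pyTivo command from the earlier
# posts for 'sleep 30' in real use; /tmp/nohup-demo.log is a scratch file.
nohup sleep 30 </dev/null >/tmp/nohup-demo.log 2>&1 &
pid=$!
echo "detached pid $pid"
```

Closing the terminal after this leaves the backgrounded process alive, which is exactly what the start-at-boot case needs as well.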

What distribution is your NAS using? Does it employ System V init scripts? If you don't know the answers to those questions, look in /etc. Is there a directory named init.d and several named rcS.d, rc1.d, rc2.d, and so forth? Those contain the startup and shutdown scripts on a System V init machine. If not, then there is probably a directory named /etc/rc.d which contains the startup scripts.

Are the man pages stored on your NAS? That can make it easier.


----------



## Iluvatar (Jul 22, 2006)

johnh123 said:


> OK, i've got the 1411+ up and running. Installed pytivo following the thread over at the pytivo forums. It does not start automatically.
> 
> If I use putty to go in and enter '/opt/bin/python /usr/local/pyTivo/pyTivo.py > /dev/null 2>&1 & '
> 
> then it will run. If I exit putty, then it stops. How do I keep it running all the time?


A little convoluted, but this is how I have mine set up to start on boot (based on a script from the Synology forums). I never bothered to simplify it, but it could use it. It works just fine, though.

In pyTivo directory I have 'start.sh':

```
#!/bin/sh
# Wait for boot to settle, then launch pyTivo in the background.
sleep 10
/opt/bin/python /usr/local/pyTivo/pyTivo.py > /dev/null 2>&1 &
```
In /usr/local/etc/rc.d/ I have 'pyTivoSyno.sh' which starts and stops the service:

```
#!/bin/sh
RETVAL=0

start() {
    echo -n "Starting pyTivo: "
    ps | grep pyTivo.py | grep python
    RETVAL=$?
    [ $RETVAL -eq 0 ] && echo "pyTivo already running: Exiting" && exit 1

    sh /usr/local/pyTivo/start.sh > /dev/null 2>&1 &
    RETVAL=$?
    [ $RETVAL -eq 0 ] && echo -n "...pyTivo will start in 10s"
    echo
    return $RETVAL
}

stop() {
    echo -n "Stopping pyTivo: "
    kill -9 `ps | grep pyTivo.py | grep python | awk -F" " '{print $1}'`
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && echo -n "done"
    echo
    return $RETVAL
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart|reload)
        stop
        sleep 1
        start
        RETVAL=$?
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
esac
exit $RETVAL
```
In /etc/rc.local file I have this:

```
#!/bin/sh

# Optware setup
[ -x /etc/rc.optware ] && /etc/rc.optware start
sh /usr/local/etc/rc.d/pyTivoSyno.sh start > /dev/null 2>&1 &
sleep 15

exit 0
```
Make sure to 'chmod 755' all the above files. Restart. About 60s after the beep you should be able to see the pyTivo server.

If using ssh to login and work on pyTivo:

This will shutdown pyTivo:

```
sh /usr/local/etc/rc.d/pyTivoSyno.sh stop
```
This will start pyTivo without having to reboot the box:

```
sh /usr/local/etc/rc.d/pyTivoSyno.sh start
```


----------



## lrhorer (Aug 31, 2003)

Iluvatar said:


> A little convoluted but this is how I have mine set up to start on boot (based on a script from the Synology forums). I never bothered to simplify it but it could use it. Works just fine though.
> 
> In pyTivo directory I have 'start.sh':
> 
> ...


Personally, I would recommend not redirecting to /dev/null. I redirect to a log file in /var/log.



Iluvatar said:


> In /usr/local/etc/rc.d/


OK, so it's not System V. Pity. Dependency based booting is so much more bulletproof / futureproof under System V.


----------



## Iluvatar (Jul 22, 2006)

lrhorer said:


> Personally, I would recommend not redirecting to /dev/null. I redirect to a log file in /var/log.


True but I have no use for the output on my NAS. If there is an issue (rarely) I will just login and run it from the terminal to see what went wrong.


----------



## lrhorer (Aug 31, 2003)

Iluvatar said:


> In /usr/local/etc/rc.d/ I have 'pyTivoSyno.sh' which starts and stops the service:
> 
> ```
> RETVAL=0
> ...


Uh, hold on. I must be missing something, here. I don't see where / how this is detaching the child process. Running this script from a terminal session should cause pyTivo to exit when the terminal is closed.


----------



## johnh123 (Dec 7, 2000)

lrhorer said:


> It will either return a fully qualified filename (meaning nohup is on the machine), or nothing at all (meaning it isn't).
> 
> What distribution is your NAS using? Does it employ System V init scripts? If you don't know the answer to those questions, look in /etc. Is there is a directory named init.d and several named rcS.d, rc1.d, rc2.d, and so forth? Those contain the startup and shutdown scripts on a System V init machine. If not, then there probably is a directory named /etc/rc.d which contains the startup scripts.


This is a synology running dsm 4.0 - I will try the scripts Iluvatar posted.


----------



## lrhorer (Aug 31, 2003)

Iluvatar said:


> True but I have no use for the output on my NAS. If there is an issue (rarely) I will just login and run it from the terminal to see what went wrong.


Except that by then whatever went wrong may no longer be wrong.


----------



## lrhorer (Aug 31, 2003)

johnh123 said:


> This is a synology running dsm 4.0 - I will try the scripts Iluvatar posted.


They should work fine for starting automatically, but I am having trouble seeing how a manual restart would work properly.


----------



## Iluvatar (Jul 22, 2006)

lrhorer said:


> Uh, hold on. I must be missing something, here. I don't see where / how this is detaching the child process. Running this script from a terminal session should cause pyTivo to exit when the terminal is closed.


Looking at it you are right. I typically only use the start on boot function and not the sh service stop/start commands so I never came across it.


----------



## johnh123 (Dec 7, 2000)

Now I think I've taken a step backwards. I have the following as my rc.local in /etc:

```
#!/bin/sh

# Optware setup
[ -x /etc/rc.optware ] && /etc/rc.optware start

sh /usr/local/etc/rc.d/pyTivoSyno.sh start > /dev/null 2>&1 &
sleep 15

exit 0
```

So not only does pytivo not start, neither does optware. To get pytivo going, I first have to do this:

```
[ -x /etc/rc.optware ] && /etc/rc.optware start
```
then this:

```
sh /usr/local/etc/rc.d/pyTivoSyno.sh start > /dev/null 2>&1 &
```
Then it lasts only as long as my putty session is going. Any ideas as to where to look?
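
One low-tech way to narrow this down (just a debugging suggestion, nothing Synology-specific) is to drop a marker line near the top of rc.local and look for the file after a reboot:

```shell
# If this marker file never appears after a reboot, rc.local itself is not
# being executed (wrong location, permissions, or bad line endings). If it
# does appear, the failure is further down in the script.
echo "rc.local ran at `date`" >> /tmp/rc.local.ran
```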


----------



## lrhorer (Aug 31, 2003)

I think I rest my case concerning redirection to /dev/null.

If optware is not running, then it is likely something is wrong with rc.local.

There are many things that could be wrong, but let's try the obvious, first.

1. What are the permissions for rc.local? Type


```
ls -l /etc/rc.local
```
2. Did you create the rc.local file or edit an existing one? Either way, what utility did you employ to create / update the file? Is it possible you inadvertently inserted <CR><LF> codes?

3. What is the result of typing the following command:


```
/etc/rc.local
```
4. What are the contents of /etc/inittab?

5. I am not as familiar with RedHat / Fedora derivatives as with Debian, but it strikes me rc.local may not be the best place to initialize pyTivo. I do know that rc.local is being deprecated.

6. Are you guys sure rc.local should be in /etc? A number of distros put it in /etc/rc.d.


----------



## johnh123 (Dec 7, 2000)

1 - permissions are -rwxr-xr-x
2 - I created it - I recreated it again this morning with Notepad++
3 - -ash: /etc/rc.local: not found (and yes, there is an rc.local at /etc/rc.local)
4 - there is no /etc/inittab


----------



## lrhorer (Aug 31, 2003)

johnh123 said:


> 1 - permissions are -rwxr-xr-x
> 2 - I created it- I recreated it again this morning with notepad ++


That's the problem (or at least the first one). Notepad creates lines terminated with <CR><LF>, which are not valid EOL characters for a Linux text file. Do not use Notepad. If you must edit on your Windows machine, use a Linux-compatible editor (like Edit Plus); otherwise use vi (vim) or some other editor on the NAS. Vi is a little arcane for someone who has never used it.
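
You can confirm and strip the DOS line endings from the shell itself; a quick sketch using a throwaway file:

```shell
# Simulate a Notepad-saved script: lines end in CR LF instead of plain LF.
printf '#!/bin/sh\r\nexit 0\r\n' > /tmp/rc.local.dos
# 'od -c' shows a \r before each \n when the file is DOS-formatted.
od -c /tmp/rc.local.dos
# Deleting the carriage returns yields a Unix-formatted copy.
tr -d '\r' < /tmp/rc.local.dos > /tmp/rc.local.unix
```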



johnh123 said:


> 3 - -ash: /etc/rc.local: not found (and yes, there is an rc.local in /etc/rc.local)


That's a result of trying to parse a line with an improper EOL character.



johnh123 said:


> 4 - there is no /etc/inittab


Strange. The init application used by most Fedora / RedHat derivatives (and most Unix systems) usually uses inittab to define the runlevels. I suppose the NAS only has one runlevel.


----------



## johnh123 (Dec 7, 2000)

OK, great, thanks for your help, I got it fixed. Had to use vi, as even Edit Plus was inserting ^M at the end of every line.


----------



## lrhorer (Aug 31, 2003)

johnh123 said:


> OK, great, thanks for your help, I got it fixed. Had to use vi as even edit plus was inserting ^M at the end of every line.


In Edit Plus, you can change the file type to "Unix". Go to <Document><File Format (CR/LF)>. In any case, it is good you got it working.


----------



## johnh123 (Dec 7, 2000)

jcthorne said:


> Java on the NAS would be a problem. Would suggest using vidmgr and pytivo to effectively replace the function of streambaby from the NAS.


Just installed streambaby on the 1411+, and...it works!

Used the java package from here: http://pcloadletter.co.uk/2011/08/23/java-package-for-synology/

Now I've got to figure out how to keep it running as a 'service' so it will continue and I don't need to leave a CLI open.
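
In case it helps, a hypothetical wrapper in the same spirit as the pyTivoSyno.sh script earlier in the thread; the /usr/local/streambaby path and the process-matching pattern are assumptions to adjust for your install:

```shell
#!/bin/sh
# Hypothetical start/stop wrapper for streambaby, modeled on the
# pyTivoSyno.sh script posted earlier; drop it in /usr/local/etc/rc.d/ and
# call it from rc.local. The paths and the grep pattern are assumptions,
# not a known-good configuration.
start() {
    cd /usr/local/streambaby && nohup ./streambaby.sh >/dev/null 2>&1 &
    echo "streambaby starting"
}
stop() {
    pids=`ps | grep streambaby.sh | grep -v grep | awk '{print $1}'`
    [ -n "$pids" ] && kill $pids 2>/dev/null
    echo "streambaby stopped"
}
case "${1:-}" in
    start) start ;;
    stop)  stop ;;
    *)     echo "Usage: $0 {start|stop}" ;;
esac
```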


----------



## noseph (Oct 13, 2003)

johnh123 said:


> Just installed streambaby on the 1411+, and...it works!
> 
> Used the java package from here: http://pcloadletter.co.uk/2011/08/23/java-package-for-synology/
> 
> Now I've got to figure out how to keep it running as a 'service' so it will continue and I don't need to leave a CLI open.


Great find, I may give a shot at installing StreamBaby on my DS409.


----------



## jcthorne (Jan 28, 2002)

Just wondering, but with the availability of vidmgr, why does anyone want Streambaby installed on the NAS? It just seems we have progressed past that in features and usability.


----------



## johnh123 (Dec 7, 2000)

I think both have their place. Sometimes you may want to skip way ahead to a part that hasn't yet downloaded with vidmgr. Also, with my kids, they would fill up the tivo with stuff from vidmgr and not delete it, so I'd rather they use streambaby. Also, it starts right away vs a bit of a delay with vidmgr. Don't get me wrong, I'd rather have vidmgr than streambaby, but having both is even better.


----------



## jcthorne (Jan 28, 2002)

johnh123 said:


> I think both have their place. Sometimes you may want to skip way ahead to a part that hasn't yet downloaded with vidmgr. Also, with my kids, they would fill up the tivo with stuff from vidmgr and not delete it, so I'd rather they use streambaby. Also, it starts right away vs a bit of a delay with vidmgr. Don't get me wrong, I'd rather have vidmgr than streambaby, but having both is even better.


Thanks, good reasons all. Things I had not considered.


----------



## cassiusdrow (May 21, 2003)

hefe said:


> It may be a next step to figure out how to load servers on the unRaid system, which is Linux based...but one step at a time...still learning how all this stuff plays together!


I am running unRAID as well, and have pyTiVo, HME for Python, PyHME-Jukebox, and PyHME-Video Manager 2 running on it. Read through this thread to get started with getting Python and pyTiVo running on your unRAID server:

http://lime-technology.com/forum/index.php?topic=6571.0

Once that is done, it is relatively easy to add the other Python-based HME apps.


----------

