# Visual Proof: OTA HD vs. DirecTV HD



## TyroneShoes (Sep 6, 2004)

Well, isn't this interesting.

There has been a lot of speculation about the quality of OTA HD vs. DTV HD. I think I can end the speculation by allowing you all to see exactly what I can see. I have uploaded some images taken from my TV using a $200 4.1 MP Sony camera in VGA mode. The images are from FOX network HD, from Sunday night's bumper promo going into their prime block. These are all 720p images recorded on the same HR10-250 and output at 720p to a Sony display with a native resolution of 768 lines. Both are stills of recordings made on one of my HR10's, and both represent the exact same frame of video.

The difference is that one image is from the FOX O&O here OTA, and the other is from the FOX O&O in LA delivered as DTV channel 89. One shot is from just before "The Simpsons" at 6 PM MST from KSAZ-DT, and the other is two hours later, from just before the identical episode of "The Simpsons" at 7 PM PDT from KTTV-DT.

Both TV stations output the ASI stream using a Terayon 5100 MPEG splicer, which keeps the file in virgin ASI format, so there is no local downconversion or reconversion at either facility. The only other difference, as far as the source is concerned, is that LA receives the signal by C-band sat from FOX's west coast HD feed, and the other station receives their MST FOX HD feed (which should be identical, the same server copy uplinked twice over different transponders at different times).

Both stations receive and process the signal in the identical manner using identical receivers, splicers, etc. This means that the only thing that can make one image look different from the other is the extra processing that DTV does to get the ASI signal up to their sat, and down to our dish antennae, including recompressing in HD-Lite, and any attendant compression artifacts or resolution artifacts that this might incur.

One image shows significantly more pixellation than the other. I will leave it to you to guess which image is OTA, and which is DTV, but I doubt any of you will guess wrong. Of course the artifacts seen are primarily because the graphics used are moving very quickly, which is difficult to compress without artifacts. That means that for still images, there will be little difference, possibly a bit less resolution. But for moving images, even if ever so briefly, the difference between OTA and DTV is like night and day, as you can plainly see.

I've not uploaded images here before, so forgive me if I F this up, but here are the first two images, to be followed by two more. Remember, this is MPEG-2 OTA vs. MPEG-2 DTV, from nearly identical sources. Once DTV uses MPEG-4, the difference in quality will do one of three things: get better, get worse, or stay the same.

Let me know what you think.


----------



## TyroneShoes (Sep 6, 2004)

And here are the other two images.

Interestingly enough, notice how the pixellation artifacts actually, and ironically, tend to mask the golden "ghost" image of the receding FOX logo over the blue background. In that one tiny aspect, more pixellation is actually better. But the visual effect as a whole is certainly better with less pixellation, as I'm sure you all would agree. Again, one is OTA, one is DTV, and every other aspect is unchanged, making this fairly unimpeachable empirical evidence that "one of these is not like the other".

Of course camera stills of static images from either DTV or OTA are razor-sharp by comparison to either shot of these quickly-moving graphics, and it is difficult to determine whether a picture taken under those circumstances is of reality or of the screen of a 60" TV. But once things start to move quickly as seen here, the difference becomes pretty obvious.


----------



## AVPhan (Oct 20, 2004)

TyroneShoes said:


> And here are the other two images.
> 
> Interestingly enough, notice how the pixellation artifacts actually, and ironically, tend to mask the golden "ghost" image of the receding FOX logo over the blue background. In that one tiny aspect, more pixellation is actually better. But the visual effect as a whole is certainly better with less pixellation, as I'm sure you all would agree. Again, one is OTA, one is DTV, and every other aspect is unchanged, making this fairly unimpeachable empirical evidence that "one of these is not like the other".
> 
> Of course camera stills of static images from either DTV or OTA are razor-sharp by comparison to either shot of these quickly-moving graphics, and it is difficult to determine whether a picture taken under those circumstances is of reality or of the screen of a 60" TV. But once things start to move quickly as seen here, the difference becomes pretty obvious.


Come on now....

So OTA is better? That's what I believe, but I am no expert.


----------



## TyroneShoes (Sep 6, 2004)

"Come on now..."? What...you're thinking this might be some sort of Photoshop trick, maybe? Hardly.

Me telling you that OTA is better would be me trying to influence you, me trying to put thoughts into your head. First, I have no axe to grind. Second, if what you believe is what you believe because of what you _think_ you _should_ believe (which is typically the problem with the human belief system), then we haven't really done anything significant to get at the truth here. If you just aren't buying what your eyes are telling you, then you're beyond help. But then that's OK.

My attempt was to cut through all of that by merely presenting unbiased images in as scientific a manner as possible, each of which tells its own truth in an obvious and unvarnished fashion. No expertise is required on either of our parts. Some can't handle the truth, and I get that. For the rest of you, soldier on.


----------



## vascott1 (Jan 8, 2004)

Just a suggestion. Tell us which image is which? One in each set is better than the other for sure.

Thanks


----------



## newsposter (Aug 18, 2002)

Is this an offshoot of the mpeg4 vs. OTA thread? I doubt it, because nowhere here did you mention mpeg4, so I'm probably wrong there. 

I never thought there was any reason to doubt that MPEG-2 HD on DTV was worse than OTA, but I assume that's the real intent of the post. Makes me glad I never applied for waivers and instead spent $400 on the antenna labor. 

Now you need to get your hands on mpeg4 and post those pics!


----------



## bdlucas (Feb 15, 2004)

Thanks Tyrone, that's interesting. Any chance of getting some screen shots of more static scenes where (presumably) macroblock artifacts don't predominate, so we can get some idea of how much softer the DirecTV image might be? Ideally of course something that has lots of sharp detail...


----------



## aaronwt (Jan 31, 2002)

The difference is even more noticeable on a 1080i station.


----------



## ncsercs (May 5, 2001)

OTA is better. I'm 30 miles out and my antenna is on the roof of a 7-story building. Clear as a bell. And free.


----------



## BigTuna (Dec 9, 2003)

Shoes-

You have wayyyyyyy tooooooo much time on your hands. Here's an idea.........just watch the one that looks the best, you get both of them and only pay for one of them.

My OTA is flawless as compared to DTV. Maybe it's because I live less than 3 miles from the Hancock/Sears Tower in Chicago with a clear line of sight? No compression and no signal loss due to weather..........hmmmmm.

Your results cannot be generalized for the whole country and everyone's personal situation. If I lived 50 miles from my OTA towers, don't you think my DTV picture would look better?

What was the point of your experiment that took the better part of an evening? Am I missing something here? This isn't rocket science. Save the psychological rhetoric on why/what we believe things (I assume you don't have a degree in Psychology or any science for that matter from an accredited university). In order for this to be "science" as you put it, you have to take a sampling and do multiple experiments. Your conclusions are based on YOUR situation and ONE study. This is the most moronic thing I've ever read on this forum. 

Let the flaming begin................


----------



## newsposter (Aug 18, 2002)

here's a chart of some bitrates for OTA and DTV

http://www.widemovies.com/dfwbitrate.html

I had no idea HBO and SHO were so bad though I knew they were worse than my ota.


----------



## jfischer (Oct 14, 1999)

BigTuna said:


> If I lived 50 miles from my OTA towers, don't you think my DTV picture would look better?


Nope. Signal strength has nothing to do with the quality of the picture, unlike with an analog picture. With digital broadcasts, it's either there, or it isn't. When you're on the fringe of losing the OTA signal you'll get lots of breakups, just like rain fade on the dish.

My local ABC affiliate just added a new antenna. Before, I had signal strengths in the 35s. Now I get it in the 90s. However, the picture is no better than it was before, I just don't have the occasional dropouts and picture freezes I had when it was down in the 30s.
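The all-or-nothing behavior described above can be sketched as a toy model in Python. The threshold value and the 0-100 scales are invented purely for illustration; they are not real ATSC numbers, and near the edge a real receiver shows breakups and freezes rather than a clean step:

```python
# Toy model of the digital "cliff effect": a hypothetical receiver whose
# forward error correction recovers everything above a signal threshold,
# so picture quality is flat until reception fails entirely.
# The fec_threshold and the 0-100 scales are illustrative, not ATSC specs.

def picture_quality(signal_strength, fec_threshold=30):
    """Return a 0-100 quality score for a 0-100 signal-strength reading.

    Above the FEC threshold every bit is recovered and quality is perfect;
    below it the decoder can no longer correct errors and the picture is gone.
    """
    return 100 if signal_strength >= fec_threshold else 0

def analog_quality(signal_strength):
    """An analog picture, by contrast, degrades gradually (more snow)."""
    return max(0, min(100, signal_strength))

for s in (95, 40, 35, 29):
    print(s, picture_quality(s), analog_quality(s))
```

In this sketch a reading of 35 and a reading of 95 produce the identical picture, which is the point of the post above: the new antenna removed dropouts, not softness.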


----------



## bdlucas (Feb 15, 2004)

Tyrone, pls do us all a favor and resist the urge to compose one of your manifestos in response to BigTuna.  He's out of line, no need to respond. Just put the keyboard down, ok? Good, I knew you could do it. 

If I could get OTA I'd love to do the comparisons you are showing us myself on my own TV, but I can't, so I found your post interesting for that reason. I'm rather curious to get a sense of whether the well-reported deficiencies of DirecTV vs OTA are just artifacts, as your pictures showed, or if there's something more going on, e.g. some kind of softening of the image as a preamble to the compression process, so I'd personally be thrilled if you were to post some pictures that might shed some light on that question. In any case thanks.


----------



## BigTuna (Dec 9, 2003)

blah...........blah.......blah.........

You both sound like witch doctors claiming to have the "magic potion" that cures all diseases. 

My point was very simply that you cannot draw such grandiose conclusions after a single study. You're not a scientist and certainly have no psychology background. Not everyone here is just willing to "drink the koolaid" because some self-proclaimed expert types an email and attaches a couple of pictures.

Let Tyrone Nut Job fight his own battles, don't play mommy and beg him not to reply.


----------



## txfeinbergs (Mar 21, 2004)

Wow, a lot of..... hostility here.


----------



## Todd (Oct 7, 1999)

I usually notice pixelization problems more on OTA broadcasts than on D*.


----------



## newsposter (Aug 18, 2002)

Todd said:


> I usually notice pixelization problems more on OTA broadcasts than on D*.


Ditto. But compared to the PQ of DirecTV's locals, OTA is well worth putting up with the occasional pixel stuff, if it appears.


----------



## dagap (Dec 5, 2003)

TyroneShoes, thanks for your efforts.


----------



## Lee L (Oct 1, 2003)

MMMM, macroblocks...


At the very least, this is more reason not to want to get HD locals from D*. I can't believe they won't crank things down when they are running hundreds of locals.


----------



## Onazuka (Sep 22, 2001)

newsposter said:


> here's a chart of some bitrates for OTA and DTV
> 
> http://www.widemovies.com/dfwbitrate.html
> 
> I had no idea HBO and SHO were so bad though I knew they were worse than my ota.


Can someone explain to me bit rate vs. resolution? If they are broadcasting in MPEG2 or MPEG4 there is some kind of compression going on. If you have a picture that's mostly black, the compression is high and a small amount of data is sent. If you have a lot of colors and action, the compression is less and you have more data. So, isn't the bit rate more a function of the content and not a determination of quality?


----------



## bdlucas (Feb 15, 2004)

Onazuka said:


> So, isn't the bit rate more a function of the content and not a determination of quality?


Yes, but. The transmission medium determines a maximum possible bitrate. For example, an OTA channel can transmit no more than about 19 Mb/s, and on DirecTV each channel gets some fraction of the bitrate available on a given transponder. When there's a lot of motion, the transmitted bitrate is kept under the limit by allowing the quality to degrade - allowing artifacts in the image and, to some extent, in effect reducing the resolution - and that's the macroblocking or pixellation you see in the images Tyrone posted.

The preceding is a simplification. I'm using the term "reducing the resolution" loosely - what's really going on is that the amount of information that can be transmitted per 16x16-pixel macroblock is limited when there is a lot of motion, so the block tends to be more uniform than it should be, making it appear to some extent as a single large pixel.

The other simplification is that, by using statistical averaging across channels instead of a fixed bitrate allocation per channel, the bad effects of moving scenes can be reduced.
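To make the allocation point concrete, here is a toy Python sketch of the two schemes. The 38.8 Mb/s transponder capacity and the per-channel demand figures are made-up illustrative numbers, not DirecTV's actual values:

```python
# Sketch of fixed vs. statistically-multiplexed bitrate allocation.
# Capacity and demands are hypothetical numbers for illustration only.

TRANSPONDER_CAPACITY = 38.8  # Mb/s, hypothetical

def fixed_allocation(demands):
    """Each channel gets an equal slice, capped at what it actually asks for;
    unused slack is not redistributed in this simple version."""
    share = TRANSPONDER_CAPACITY / len(demands)
    return {ch: min(share, need) for ch, need in demands.items()}

def statmux_allocation(demands):
    """Statistical multiplexing: pool the capacity and split it in proportion
    to instantaneous demand, so a high-motion channel can borrow bits from a
    quiet one."""
    total = sum(demands.values())
    scale = min(1.0, TRANSPONDER_CAPACITY / total)
    return {ch: need * scale for ch, need in demands.items()}

# A high-motion sports feed next to two low-motion feeds:
demands = {"sports": 18.0, "news": 6.0, "sitcom": 8.0}
print(fixed_allocation(demands))    # sports is starved at ~12.9 Mb/s
print(statmux_allocation(demands))  # sports gets its full 18 Mb/s
```

With the fixed split, the sports feed is the one that macroblocks during motion; with pooling, the quiet channels' spare bits absorb the burst.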


----------



## feldon23 (Mar 13, 2001)

BigTuna said:


> Shoes-
> 
> What was the point of your experiment that took the better part of an evening? Am I missing something here? This isn't rocket science. Save the psychological rhetoric on why/what we believe things (I assume you don't have a degree in Psychology or any science for that matter from an accredited university).
> Let the flaming begin................


Ok, here you go.

People are trying to make educated decisions about whether DirecTV is worth sticking with.

With these variables:

- Picture quality on DirecTV continually falling (Discovery HD is now unimpressive, Sci-Fi looks like *ASS*)
- Dumping TiVo so they can make $1 more per month per customer
- 10% price increase coming in March
- Rupert Murdoch
- Announcement of TiVo III: The Cable Dominator

you bet your high definition dollars we're considering options.

Shoes' posting was very valuable for people trying to evaluate whether DirecTV is going to hamstring us with more half-rez HD like they're doing now, and the answer is a resounding yes.

*It's not HD, it's DirecTV.*


----------



## newsposter (Aug 18, 2002)

Hey since we are talking about options, any ideas on where we can go for reviews of cable PQ? Of course not having comcast, i doubt i'd have many resources online, but just wondering if there was a 'cable' forum


----------



## toolkit (Jan 10, 2002)

Nicely stated Feldon.


----------



## feldon23 (Mar 13, 2001)

I already had my first negative experience with cable. Checked out Time Warner Cable Houston website. No channel lineup. No pricing information on their HD tier. No terms and conditions on their cable internet placed on their website (I need to verify if they will really block me from running an FTP, website, etc. and start sending me nasty letters about monthly bandwidth usage). Every useful bit of information requires a phone call.


----------



## newsposter (Aug 18, 2002)

That stinks. My local cable site lists their d/l limits and the pricing of every package. Tons of detail there if you wanna read it. A bit hard to decode some of their packages, but their 10-dollar HD package tosses in all the locals plus the stuff DTV has. If the PQ is decent, it would be a steal. Of course renting a Moxi for 10 bucks stinks, but DTV is going that way anyway so the point is moot. It even says they have FF compensation, but I thought that was a TiVo patent.


----------



## feldon23 (Mar 13, 2001)

**warning: blasphemy!** Moxi is almost as good as a TiVo. I'd go for it.


----------



## Lee L (Oct 1, 2003)

feldon23 said:


> I already had my first negative experience with cable. Checked out Time Warner Cable Houston website. No channel lineup. No pricing information on their HD tier. No terms and conditions on their cable internet placed on their website (I need to verify if they will really block me from running an FTP, website, etc. and start sending me nasty letters about monthly bandwidth usage). Every useful bit of information requires a phone call.


I've noticed that about TWC here in Raleigh as well. Their pricing structure is virtually indecipherable to anyone who is not a sales rep.


----------



## osultan (Dec 1, 2003)

toolkit said:


> Nicely stated Feldon.


Ditto. Thanks Tyrone.


----------



## newsposter (Aug 18, 2002)

feldon23 said:


> **warning: blasphemy!** Moxi is almost as good as a TiVo. I'd go for it.


Before laying out a few hundred for my OTA setup, I did look for info on the Moxi to see if it was any good. My local cable does have great/more HD channels for virtually the same price as DTV. And though I was only swayed by one opinion not to go there, it was a moderator with personal experience that gave me the info I used to determine it wasn't for me... if you can't trust a mod, who can you trust?


----------



## Fish Man (Mar 4, 2002)

Tyrone is clearly a man after my own heart. I wish I'd done this experiment myself. 

But, clearly, Tyrone is also a man with more time on his hands than me.  (The fact that my waiver applications for distant HD networks were denied by all my local channels, so I don't have the ability to compare the same channel OTA vs. DirecTV, notwithstanding.)

This is very interesting.

Thanks, Mr. Shoes for posting it!

In general terms, I agree with feldon23.

The "value gap" between DirecTV and many cable offerings is shrinking. This will vary greatly by area, obviously, since cable systems range from awesome to massive steaming piles of suck, depending on who the provider is in your area. (My cable provider, Charter, is definitely in the lower 50% in terms of "value" of their service, giving DirecTV an advantage here.)

However, the current huge unknown is what will DirecTV's MPEG4 offerings look like? Eventually, ALL their HD will be going to MPEG4. It's possible, although by no means guaranteed, that the MPEG4 HD from DirecTV will be impressive, and hold its own against any cable offering, and maybe even stand up to OTA.


----------



## SpankyInChicago (May 13, 2005)

BigTuna said:


> types an email


These are not emails.


----------



## HogarthNH (Dec 28, 2001)

Fish Man said:


> However, the current huge unknown is what will DirecTV's MPEG4 offerings look like? Eventually, ALL their HD will be going to MPEG4. It's possible, although by no means guaranteed, that the MPEG4 HD from DirecTV will be impressive, and hold its own against any cable offering, and maybe even stand up to OTA.


Every report so far is that while the MPEG4 HD looks way better than the MPEG2 HD, it's still somewhat poor compared to full-bit-rate OTA. It's not unknown at all.

(A lot of modifiers there, alas.)

H


----------



## mx6bfast (Jan 2, 2004)

Ok I'll ask, were B & C from D*?

My local FOX just went HD OTA (a full 19 Mb/s) and I also receive the national D* feed. I could tell a difference in PQ in the first couple of minutes; OTA won. So to call this a worthless thread is stupid. 

As mentioned above, distance from the towers make no difference on the PQ.


----------



## Fish Man (Mar 4, 2002)

txfeinbergs said:


> Wow, a lot of..... hostility here.


From precisely *ONE* thread-crapping troll.

Ignore him. Better yet, put him on "ignore".


----------



## Lee L (Oct 1, 2003)

HogarthNH said:


> Every report so far is that while the MPEG4 HD looks way better than the MPEG2 HD, it's still somewhat poor compared to full-bit-rate OTA. It's not unknown at all.
> 
> (A lot of modifiers there, alas.)
> 
> H


All I can do is look at what cable, DirecTV and Dish Network have done every time a new compression or coding scheme was introduced. They used it to cram more channels into the same bandwidth with similar quality at first, then gradually added more and more to the point that the quality suffered. I just don't see that changing this time around.


----------



## bdlucas (Feb 15, 2004)

Lee L said:


> All I can do is look at what cable, DirecTV and Dish Network have done every time a new compression or coding scheme was introduced. They used it to cram more channels into the same bandwidth with similar quality at first, then gradually added more and more to the point that the quality suffered. I just don't see that changing this time around.


There is a counterexample - DirecTV initially rolled out with something less than MPEG-2 (I've seen it called MPEG 1.5) and then subsequently improved quality  by switching to MPEG-2 when the technology to do MPEG-2 encoding in real time was developed.

I was a subscriber at the time and the sad fact is that even with MPEG 1.5 the picture was better than the snowy crap my cable company was providing. The even sadder fact is that digital cable now has the edge in PQ and will probably continue to do so. I'll be seriously evaluating the cablecard TiVo.

But of course past performance is no guarantee of future results.


----------



## BlankMan (Mar 30, 2002)

BigTuna said:


> Shoes-
> 
> You have wayyyyyyy tooooooo much time on your hands. Here's an idea.........just watch the one that looks the best, you get both of them and only pay for one of them.
> 
> ...


Wow. Talk about the pot calling the kettle black.... This is the most moronic post I've read of late....

Nice job Tyrone and thanks.


----------



## TyroneShoes (Sep 6, 2004)

BigTuna said:


> Shoes-
> 
> You have wayyyyyyy tooooooo much time on your hands....This is the most moronic thing I've ever read on this forum.
> 
> Let the flaming begin................


   

If you've been on this forum very long I think you already know I spend way too much time here, but the "experiment" took only about 5 minutes. I had come home from a SB party, and before I left I wanted to make sure I got the "Simpsons" repeat, and the best way to make sure was to record both feeds. I had no intention of discovering the difference between the two; it was just a happy accident. I noticed the pixellation on one feed and compared it to the other. And I thought maybe some folks here might find the comparison interesting, so I snapped a couple of quick shots. I have to admit I didn't really expect anyone to find it moronic. You are only the first to say so, if you can believe that.



BigTuna said:


> ...
> My OTA is flawless as compared to DTV. Maybe it's because I live less than 3 miles from the Hancock/Sears Tower in Chicago with a clear line of sight? No compression and no signal loss due to weather..........hmmmmm...If I lived 50 miles from my OTA towers, don't you think my DTV picture would look better?...


Where you are in relation to the broadcast towers is not relevant, because DT video quality at 1 mile away is identical to video quality 50 or even 100 miles away, assuming you can still receive the signal, all else being equal. Reception of ATSC with 100% uncorrupted bits is no different from reception that has only 40% uncorrupted bits, because the pictures we see are created from information that has huge amounts of error correction and error concealment, using decoders that are clever enough to replicate any missing information perfectly, similar to how a connect-the-dots image yields a line drawing. Maybe _your_ free time would be better served by learning about MPEG compression as opposed to trying to get under my skin (which you've failed miserably at, BTW).
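The error-correction idea above can be illustrated with a toy repetition code. Real ATSC uses Reed-Solomon and trellis coding, which are far more efficient than this; the sketch only shows the concept that redundancy lets a decoder recover the original bits exactly despite corruption:

```python
# Toy repetition code: send each bit three times, decode by majority vote.
# Illustrates (crudely) why digital PQ doesn't fade with distance -- as long
# as errors stay within the code's correcting power, the output is perfect.

def encode(bits, copies=3):
    """Repeat every bit `copies` times."""
    return [b for b in bits for _ in range(copies)]

def decode(received, copies=3):
    """Majority-vote each group of `copies` received bits."""
    out = []
    for i in range(0, len(received), copies):
        group = received[i:i + copies]
        out.append(1 if sum(group) > copies // 2 else 0)
    return out

msg = [1, 0, 1, 1]
tx = encode(msg)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
tx[1] = 0                 # corrupt one transmitted bit
tx[4] = 1                 # and another, in a different group
assert decode(tx) == msg  # the decoder still recovers the message exactly
```

Corrupt too many bits in one group and decoding fails outright, which is the cliff: perfect, or gone, with nothing in between.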

The reason (one of the big ones, anyway) we've moved to digital distribution for terrestrial TV is exactly because distance and typical atmospheric disturbances have no effect on PQ.



BigTuna said:


> ...Your results cannot be generalized for the whole country and everyone's personal situation....


If they could not be generalized, then I have indeed wasted your time and everyone else's. But, again, if you understood enough about how DT works, you would also understand that this little exercise can indeed represent the difference between DTV reception (which is obviously the same for everyone) and OTA reception. In fact, that is the entire point. Going back to the example, the signal that is received OTA from FOX KTTV in LA and KSAZ Phoenix and in Minneapolis and Atlanta and everywhere else that FOX is carried, is virtually identical as far as PQ goes, for all receivers, due to the unique system that FOX uses which keeps the original uplinked pristine signal in the digital domain all the way to your STB.

Both images in the examples are from the same source. Both are from the ASI signal from FOX TOC, and remain the same until received, demodulated, and decoded, either by my OTA tuner, or by DTV who then reprocesses it, and sends it through their pipeline to my Sat receiver, which resides in the same PVR as the OTA tuner. That means that the only difference is what DTV does to it, and that difference will be the same here, as it is in Keokuk, Iowa and in Manhattan.

The fine point that might have swooshed over your pointed little head in all of this is that we are not talking at all here about reception issues. Reception issues do not affect PQ, only the ability to receive the signals with any amount of continued stability. If you have a weak antenna system and get breakup on DT every few minutes, that is not a PQ issue, it is a reception issue. The PQ is as perfect as it is when it leaves the FOX TOC, as long as I or you or anyone can receive it. And I have no reception issues, so that takes reception completely out of the picture.



BigTuna said:


> ...
> ... Am I missing something here? This isn't rocket science. Save the psychological rhetoric on why/what we believe things (I assume you don't have a degree in Psychology or any science for that matter from an accredited university). In order for this to be "science" as you put it, you have to take a sampling and do multiple experiments. Your conclusions are based on YOUR situation and ONE study...


 Yes, I think you might indeed be missing something, maybe a lot of things. I also think I'm as allowed as anyone else to speculate on psychological issues. The entire concept of DT is based on the psychology of the human perception system, and mindset is an important aspect of that. Oh, and my degree? Ironically enough, I was a psychology major with a philosophy minor, and graduated before chucking it all for more technical fields such as broadcast TV, cable TV, and broadcast radio, all in engineering maintenance. If I'd become a psychologist or the equivalent, I probably would have eaten my gun long ago.

Yes, my conclusions are based on my situation...receiving DTV and receiving OTA from virtually the same source that DTV receives it from. But my situation is the same exact situation most of us fall into, because we all receive the same DTV signal, and we all (or most of us) can receive an OTA FOX signal, which will have virtually identical PQ everywhere, due to the process that FOX uses. That makes doing it in one place representative of everywhere. Anyone living in a FOX DT DMA could repeat this experiment and get the same results.

And it was not one example, it was two. And I have a third, from the tease promo of last night's "24" that shows the exact same thing, just in case anyone thinks a momentary reception glitch could have queered the original examples. I would post that, but it adds nothing new to the mix, and might be considered a waste of my time.

There.

That wasn't too "Unabomber" for you, was it, BD?


----------



## TyroneShoes (Sep 6, 2004)

OK, let me make one last point. These are extreme examples. They are only representative of individual frames, which only show a difference because when there is extreme motion it is difficult to reconstruct images that are compressed too much. The frames on either side of the examples appear nearly identical, because the bit starving is very fleeting. Most of the time the PQ will be almost indistinguishable. In a sitcom with little motion, you might never get such pixellation, and the PQ would be identical, OTA vs. DTV, for these two sources 100% of the time. In a drama like "24", however, as much as a frame every few seconds might show some added impairment, while the bulk of the program would look the same from either source. Many will not even notice this. That means the difference may be minimal enough for most people to ignore at least 99% of the time. It's certainly not a reason to shun DTV, which is doing as good a job as it can under adverse circumstances right now.

My general perception is that I really can't see a difference, almost all of the time. It's just that what I saw Sunday night caught my eye, and I wanted to share that with you. I'm perfectly happy with DTV HD PQ, generally speaking.

Also, the amount of compression artifacts should actually be slightly less (assuming the same bit allocation) than if they had _not_ used HD-Lite resolution, and HD-Lite is not a factor in the resolution of 720p images on DTV, so HD-Lite cannot be faulted here, only the severity of the compression. HD-Lite only has a negative effect on original 1080i images, and then only as a reduction of horizontal resolution.
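The HD-Lite behavior described above can be sketched in a few lines of Python. The function name and structure are mine, made up for illustration; the resolutions are the ones discussed in this thread:

```python
# Sketch of the reported "HD-Lite" rule: only the horizontal resolution of
# 1080-line sources is reduced; 720p is already 1280 wide and passes through.
# Hypothetical helper for illustration -- not any real DirecTV API.

def directv_downrez(width, height):
    """Return the (width, height) reportedly transmitted for a given source."""
    if height == 1080:          # 1080i sources get squeezed to 1280 wide
        return (1280, height)
    return (width, height)      # 720p sources are left alone

print(directv_downrez(1920, 1080))  # (1280, 1080) -- "HD-Lite"
print(directv_downrez(1280, 720))   # (1280, 720)  -- unchanged
```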

And I guess the cat's out of the bag...D and A are OTA, while B and C are DTV.


----------



## Wolffpack (May 28, 2003)

TS,

Thanks for the post. VERY informative. Especially the FOX broadcast info. I was not aware of that. Thanks again.

:up:


----------



## kimsan (Jan 23, 2002)

Interesting stuff.

I'm first in line to concede that the HD package channels are being downconverted to "HD-Lite". A recent DiscoveryHD recording was 1280x1088...not exactly part of the HD definition. Hint: local OTA PBS broadcasts are 1920x1080.

That said, the national-feed major nets appear to be unaltered...at least in resolution. They may be cutting back on bit rate, but the frame sizes appear correct. The Super Bowl and Grey's Anatomy on ABC Sunday were 1280x720.

So the big Q that remains is: why does DTV downgrade the PREMIUM pay-extra-to-receive-me HD package while still carrying the NATIONAL part-of-your-distant-net package at full res?

Do they NOT want me to pay the extra bucks?

Anyone? Anyone? Bueller?


----------



## newsposter (Aug 18, 2002)

kimsan said:


> The Super Bowl and Grey's Anatomy on ABC Sunday were 1280x720.


how does one find out this stuff? web page? file size? etc


----------



## Cheezmo (Apr 26, 2004)

All 1080i channels are downrezzed to 1280x1080.
All 720p channels are left at 1280x720.

Nothing to do with hd package vs. national nets.



kimsan said:


> Interesting stuff.
> 
> I'm first in line to concede that the HD package channels are being downconverted to "HD-Lite". A recent DiscoveryHD recording was 1280x1088...not exactly part of the HD definition. Hint: local OTA PBS broadcasts are 1920x1080.
> 
> ...


----------



## Lee L (Oct 1, 2003)

bdlucas said:


> There is a counterexample - DirecTV initially rolled out with something less than MPEG-2 (I've seen it called MPEG 1.5) and then subsequently improved quality  by switching to MPEG-2 when the technology to do MPEG-2 encoding in real time was developed.
> 
> I was a subscriber at the time and the sad fact is that even with MPEG 1.5 the picture was better than the snowy crap my cable company was providing. The even sadder fact is that digital cable now has the edge in PQ and will probably continue to do so. I'll be seriously evaluating the cablecard TiVo.
> 
> But of course past performance is no guarantee of future results.


Ooo, very true. I forgot about that and while I did not have D* at the time, I remember reading about it. I do think that was in a very different time for Satellite though. The rush to cram locals in started the ball rolling on cramming in too many channels.


----------



## kimsan (Jan 23, 2002)

newsposter said:


> how does one find out this stuff? web page? file size? etc


Start with the TiVo Underground thread "Hacking your Series 2 DTivo just got a WHOLE lot easier...". That'll get you into the box.

From *T*here *Y*ou add Tools that can't be discussed on TCF, but are readily available on another forum which can't be named here.

Think of it as a quest


----------



## kimsan (Jan 23, 2002)

Cheezmo said:


> All 1080i channels are downrezzed to 1280x1080.
> All 720p channels are left at 1280x720.
> 
> Nothing to do with hd package vs. national nets.


Good info to know. I hadn't checked any of the national net 1080 channels yet. I know for sure the hd package channels get the horizontal squish.

I'll have to check the bit rate on national CBS vs my local sometime as well. That'd be a definite indicator of re-compression.


----------



## newsposter (Aug 18, 2002)

From reading just the first post or so, I don't think that helps with the T60 or HDTivo, does it?


----------



## rminsk (Jun 4, 2002)

Onazuka said:


> So, isn't the bit rate more function of the content and not a determination of quality ?


It is much, much more complicated than that. MPEG uses motion compensation and discrete cosine transforms (DCT) to compress the image. Its bit stream has a combination of I-frames (no motion prediction), P-frames (forward motion prediction), and B-frames (bidirectional motion prediction). A static frame may not compress that well with the DCT because of its frequency content. Artificial frames (computer graphics) can have very high frequency content that makes them hard to compress. Also, motion is very hard to predict in very smooth regions. It is sometimes counterintuitive what will compress well and what won't.
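To make the DCT point concrete, here is a small numpy sketch (illustrative only - not a real MPEG encoder, and the block contents are made up): it measures how much of each 8x8 block's energy lands in the low-frequency corner of the DCT, which is the part coarse quantization preserves best. A smooth gradient concentrates almost all of its energy there; a high-frequency checkerboard (the "artificial frame" case) does not.

```python
import numpy as np

N = 8  # standard MPEG block size

# Build the orthonormal 8x8 DCT-II basis matrix.
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

def dct2(block):
    """2-D DCT of an 8x8 block: C @ block @ C.T (energy-preserving)."""
    return C @ block @ C.T

# A smooth gradient block (typical natural-image content)...
smooth = np.outer(np.linspace(0, 1, N), np.linspace(0, 1, N))
# ...vs. a high-frequency checkerboard (sharp computer graphics).
checker = np.indices((N, N)).sum(axis=0) % 2

def energy_in_low_corner(block, keep=4):
    """Fraction of total DCT energy in the lowest-frequency keep x keep corner."""
    coeffs = dct2(block)
    return np.sum(coeffs[:keep, :keep] ** 2) / np.sum(coeffs ** 2)

print(f"smooth gradient: {energy_in_low_corner(smooth):.3f}")
print(f"checkerboard:    {energy_in_low_corner(checker):.3f}")
```

The gradient's energy is almost entirely in the low-frequency corner, so discarding high-frequency coefficients costs it little; the checkerboard's is not, which is why sharp graphics pixellate first when bits run short.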


----------



## kimsan (Jan 23, 2002)

newsposter said:


> From reading just the first post or so, I don't think that helps with the T60 or HDTivo, does it?


It does. They added HR10-250 support around December...many pages into the thread. Go to Gunnyman's Tivo Zipper page (referenced in the first post) and you'll see what is and isn't supported.

I've got both my units Zippered and have MRV-ish functionality despite the lack of 6.2 software.


----------



## TyroneShoes (Sep 6, 2004)

rminsk said:


> ...A static frame may not compress that well with the DCT because of its frequency content. Artificial frames (computer graphics) can have very high frequency content that makes them hard to compress. Also, motion is very hard to predict in very smooth regions. It is sometimes counterintuitive what will compress well and what won't.


You have hit on exactly why the examples I posted are so extreme. They represent HF graphics, with smooth regions, and were also quickly-changing video. That is a triple threat to compression, and is probably why it caught my eye for the first time in 15 months of viewing both sources regularly (and critically), which led to this thread.

That means that a whip-pan of Jack Bauer whirling around to shoot the gun out of a terrorist's hand on a handheld camera close-up, which may have the same amount (or more) of quickly-changing video as the graphics examples I posted, will likely not show as much pixellation as those examples might. Of course there will still be more than the OTA signal might have, all else being equal, but as I said earlier, it might not matter due to being difficult to notice.

In case anyone here is still paying attention, the SD LIL version of these examples had fairly equivalent pixellation to the HD DTV version. That makes sense, as DTV probably bases the amount of compression they will tolerate on how visible the resulting video artifacting is, and the SD standard they have set for SD LIL (low as it is) is probably a benchmark for what they would accept on the HD signals.


----------



## Drestin (Jan 24, 2002)

Honestly, I can't believe there is even a debate over PQ of OTA vs DTV. Now, I haven't got techno mumbo-jumbo to sling nor pix to post, so my evidence is purely anecdotal, however:

I have been to 7 different folks' places who have DTV HiDef TiVos, as I do, and made A/B comparisons between the DTV feed and the same show playing via the OTA antenna input. Back and forth we go, toggling with the TiVo remote, and it's so obvious, so amazingly totally completely obvious how much better the OTA PQ is that I can't imagine there is even debate on this. Is there?

I mean... the OTA signal is uncompressed and at a higher bitrate in every case. All forms of MPEG compression are lossy. Hello? Is there even a reason to continue to debate it?

I'd rather just look into: a) will MPEG-4 as sent by DTV be noticeably better than the current MPEG-2 image? (Likely.) b) since a lower-bitrate MPEG-4 signal can produce a similar picture to a higher-bitrate MPEG-2 stream, will DTV just crank up compression on MPEG-4 until it ONLY looks as "good" as the current image, to save bandwidth and cram on more channels?

I suspect that the answer to b will be "yes." Instead of thinking, "Hey, MPEG-4 produces superior image quality at the same bitrate as MPEG-2 - so let's not change the bitrate, let's give our customers better PQ!" they'll ONLY be thinking, "MPEG-4 at half the bitrate we currently use gives the same crappy, ugly PQ we have now but allows us double the channels," and screw us by forcing receiver upgrades that deliver no picture-quality upgrade, just the unfulfilled promise of more content.


----------



## super dave (Oct 1, 2002)

feldon23 said:


> Ok, here you go.
> 
> People are trying to make educated decisions about whether DirecTV is worth sticking with.
> 
> ...


 :up: :up: :up:

I really appreciate these threads, as my DTV commitment is up in 2 months and I really want a good HD package, and I am not partial to DTV, Dish, or CC at this point in time. I also want an HD DVR to time shift and FF commercials. DTV no longer gives me the warm fuzzies it once did, and I don't plan on a 2-year commitment without more HD.


----------



## feldon23 (Mar 13, 2001)

Drestin said:


> It's so obvious, so amazingly totally completely obvious how much better the OTA PQ is (vs DirecTV) that I can't imagine there is even debate on this. Is there?
> 
> I mean... the OTA signal is uncompressed and at a higher bitrate in every case. All forms of MPEG compression are lossy. Hello? Is there even a reason to continue to debate it?


Although you alluded to the correct information later in your post, I just want to make it clear, all HD signals are compressed. It's just a question of how much.

If you are fortunate enough to have local channels that each have ONE full bandwidth HD channel of 19.2Mbps, then you are probably seeing the best possible picture. However in many markets, there are useless subchannels added for news, a radar loop, or other nonsense which not only look terrible but rob the main HD channel of just enough bandwidth to introduce picture quality issues.

The 2002 Olympics on NBC were almost unbearable here in Houston because they not only had a digital copy of their analog broadcast on one subchannel (because Kingwood Cable were too cheap to run fiber) but also a radar sweep on another subchannel. The main HD channel was a joke compared to the national HDNet feed.

The signal arriving at your local channel from the network feed is typically 45Mbps. They then add commercials, graphics, logos, etc. and recompress the signal to 19.2Mbps (or less in the case of subchannels).

What DirecTV does is take that 19.2Mbps signal (which is already teetering on the edge of visible compression artifacts) and recompress it to MPEG-4. No matter what, when you recompress an image which is already at that quality 'line in the sand', you're going to push it over the edge.

If DirecTV were receiving the original 45Mbps feed and could broadcast that, then they could get away with a very small bandwidth MPEG-4 channel and it would still look very good. But since they are already starting with a compromised signal -- garbage in, garbage out.

Even though I think cable are sloths, I am seriously considering it because of the TiVo Series III and being able to fold TV service (DirecTV just got more expensive), DSL, and phone line into one bill. And cable just passes through what they are given.


----------



## newsposter (Aug 18, 2002)

feldon23 said:


> all HD signals are compressed. It's just a question of how much.
> 
> If you are fortunate enough to have local channels that each have ONE full bandwidth HD channel of 19.2Mbps, then you are probably seeing the best possible picture. However in many markets, there are useless subchannels added for news, a radar loop, or other nonsense which not only look terrible but rob the main HD channel of just enough bandwidth to introduce picture quality issues.
> 
> ...


I vote that for a sticky. It was explained in a manner I haven't seen before.

Ok, if a network feed is 45, I have a question: are the network feeds still in the clear on C-band, and would you get that 45 over the big dish onto your HDTV? I haven't had C-band since the early 90s, but I recall seeing tons of cool free stuff there, including knowing who won every Jeopardy game that week, since they were all fed down early Monday AM in the clear ---shoulda made some money there.


----------



## feldon23 (Mar 13, 2001)

Most of the BUD (Big Ugly Dish, or C-band) world is encrypted these days. That topic is trudging quite a bit out of my area of expertise. My partner and I have a pool and a BUD in the back yard. We're thinking of turning the dish into an ersatz pool heater.


----------



## tivohaydon (Mar 24, 2001)

I'm really getting the impression that DirecTV is taking HD content and compressing it down to the resolution and quality that their NTSC channels *should* be at. (But obviously isn't on DirecTV.)

As has been pointed out, even with MPEG4 stations on DirecTV will never (mathematically can't) be superior to OTA locals simply due to the recompression that's performed. At best it would be equal and that would only be if DirecTV passed the OTA bits directly which they'll never do.

DirecTV's "digital quality" does not mean "quality digital".


----------



## feldon23 (Mar 13, 2001)

If they set the MPEG-4 bitrate high enough, it will be visually equal to the original MPEG-2 feed, but that wouldn't let them add 50 DMAs, even if they do have two huge birds.


----------



## newsposter (Aug 18, 2002)

This is one area in which the solution truly is more, more, more. What's the limit on the number of satellites we can put into orbit? One bird per network at 1000 Mbps.


----------



## SpankyInChicago (May 13, 2005)

feldon23 said:


> The signal arriving at your local channel from the network feed is typically 45Mbps. They then add commercials, graphics, logos, etc. and recompress the signal to 19.2Mbps (or less in the case of subchannels).


Not sure where you arrive at the 45Mbps number. I assume you have some inside knowledge of the transmission mechanisms from the network to the local station, which I assume is over sat.

But if 45Mbps is the number, then even that is an already largely compressed signal. A full 1080i uncompressed raw HD signal is about 1.2Gbps: 1920 x 1080 x 10 x 60 = 1,244,160,000.

For those with a networking background, the uncompressed rate of an HD signal is equal to the bandwidth of an OC24.
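Both numbers in this exchange can be reproduced; the open question is whether 1080i's 60 interlaced fields should be counted as 60 full frames or as the equivalent 30. A short sketch, using the post's 10-bits-per-pixel assumption (real studio video carries more once chroma is counted):

```python
# Sanity-checking the uncompressed-rate arithmetic, assuming the thread's
# figure of 10 bits per pixel sample.
BITS_PER_SAMPLE = 10

# 1080i delivers 60 interlaced fields per second, each 1920 x 540 pixels,
# which is the same pixel rate as 30 full 1920 x 1080 frames per second.
rate_from_fields = 1920 * 540 * 60 * BITS_PER_SAMPLE
rate_from_frames = 1920 * 1080 * 30 * BITS_PER_SAMPLE
assert rate_from_fields == rate_from_frames == 622_080_000  # ~622 Mbps

# The 1.2 Gbps figure corresponds to counting 60 full frames per second:
rate_60_full_frames = 1920 * 1080 * 60 * BITS_PER_SAMPLE
assert rate_60_full_frames == 1_244_160_000  # ~1.2 Gbps
```

Either way, the raw signal dwarfs the 19.2 Mbps ATSC payload, so heavy compression is a given before any satellite recompression enters the picture.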


----------



## TyroneShoes (Sep 6, 2004)

And 45 Mb does not mean the resulting ".1" channel is backhauled at that rate. The bit stream is a composite stream, of which the eventual ".1" channel you see is only a part. For instance, FOX sends three 73 Mb streams and a 55 Mb stream currently. One of the 73 Mb streams carries the HD PDT prime feed for LA, but that stream also carries the SD PDT prime feed, iPump news feed, PSIP, tons of null bits, and whatever else they might throw in on a moment's notice, all multiplexed together. If the local station limits the bit rate of their ".1" channel (which they parse out of that 73 Mb stream) to 12.5 Mb, it will still dynamically swing between 12.5 and 8.5, all during prime.

IOW, the bottleneck is not the SMPTE310 formatting to 19.38 done at the station, it is at FOX TOC. You could receive the 73 Mb stream directly and parse out the main HD feed yourself, assuming you had the equipment available, and you would still see the identical PQ you would see while receiving OTA from the local station.


----------



## chudel (May 10, 2005)

That's awesome that you can get "more HD channels for virtually the same price as DTV". I have the DirecTiVo (non-HD) with a boring but serviceable lineup for under $50. Time Warner in Charlotte -- according to their confusing website -- can't let me in the door for under $65/mo!!

I cannot wait for an OTA HD Tivo and then both megalopolies get a boot in their arse.


----------



## kimsan (Jan 23, 2002)

Okay, some real world numbers to go along with the excellent explanations given so far.

OTA I've got two sources. PBS (1 HD and 4 SD digital channels) and CBS (1 HD and 1 UPN SD at a framerate TiVo can't handle).

PBS - 1920x1080 13 Mbps video 384kbps audio
CBS - 1920x1080 13 Mbps video 384kbps audio

DirecTV provides...well...you know exactly how many. They're either horizontally-squeezed 1080i or 720p (FOX and ABC)

1080i - 1280x1088 6.5Mbps video 384kbps audio
720p - 1280x720 6.5Mbps video 384kbps audio

For those of us with 720p DLP sets, the horizontal squeeze is pretty much irrelevant. The signal arriving at the set is going to be turned into 1280x720 in all cases before it hits the screen. DirecTV has professional gear that *should* exceed that of our sets, which means they're actually doing us a favor. Non-720p set owners have every right to complain, as they're being denied 640 lines of resolution they *could* use.

In all cases, that 6.5 Mbps compression cap means DirecTV is using twice the amount of compression my boonie local stations are. The signals *must* be more prone to compression artifacting/pixellation/macroblocking, etc., when the programming needs to exceed that variable cap. They can't help it.
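One hedged way to compare these measurements is bits spent per pixel rather than raw bitrate. A rough sketch (assuming roughly 30 effective frames per second for 1080i and ignoring audio and chroma details): by this crude measure, the horizontal downrez hands some bits back, so the effective gap is closer to 1.3x than the 2x the raw rates suggest - though the viewer still loses both resolution and bitrate.

```python
def bits_per_pixel(width, height, mbps, fps=30):
    """Average coded bits available per pixel per frame."""
    return mbps * 1e6 / (width * height * fps)

ota = bits_per_pixel(1920, 1080, 13.0)   # full-res local OTA 1080i
dtv = bits_per_pixel(1280, 1088, 6.5)    # DirecTV "HD-Lite" 1080i

print(f"OTA: {ota:.3f} bits/pixel")
print(f"DTV: {dtv:.3f} bits/pixel")
print(f"OTA/DTV ratio: {ota / dtv:.2f}")
```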

Sometimes you can see it, sometimes you can't. DirecTV is betting that even when the consumer *can* see it, they won't know the difference or the cause or who to blame. It's all about corporate thinking. "Let 'em have more HD channels and get used to the PQ hit. Lower the bar."

Those with alternative HD sources may think about bailing. I have my two OTA channels. No cable available. Dish?...I think not. DirecTV still wins for the same reason I bought my first box in '96...they're the best choice available. For me anyway.


----------



## Wolffpack (May 28, 2003)

Are you measuring the DTV Mbps or is it based on another publication? I've always been curious as to how one can measure DTV's signal.


----------



## kimsan (Jan 23, 2002)

Wolffpack said:


> Are you measuring the DTV Mbps or is it based on another publication? I've always been curious as to how one can measure DTV's signal.


These are my in-house findings based on 5 minute recorded samples taken last night between 5 and 6 PM. Some of the tools and techniques used are a banned subject here.

Womble MPEG Video Wizard lets me see the MPEG video and audio numbers. Nero Showtime or PowerDVD will display the instantaneous rates during playback.

Nice tools that I get a lot of use out of.


----------



## Drestin (Jan 24, 2002)

SpankyInChicago said:


> But if 45Mbps is the number then even that is an already largely compressed signal. Full 1080i uncompressed raw HD signal is about 1.2Gbps. 1920 x 1080 x 10 x 60 = 1244160000.


You did x10 because it's 32 bit color, I believe that is right. However, you also did x60. _Correct me if I'm wrong_ but I thought 1080i was 60 fields per second INTERLACED and therefore should be x30 frames per second transmitted.

1920 x 1080 x 10 x 30 = 622,080,000 bits per second, or 77,760,000 bytes per second - roughly 74 MBps, just under a T3C rate.



feldon23 said:


> Although you alluded to the correct information later in your post, I just want to make it clear, all HD signals are compressed.


You are correct; I didn't proofread myself. Yes, all ATSC digital signals are compressed using MPEG-2. DTV intends to recompress that MPEG-2 signal with MPEG-4; it's a simple thing to realize that it is _impossible_ to ever achieve the same quality as the original MPEG-2 signal, because MPEG-4 is also a lossy compression format.

MPEG-4 allows DTV to put more channels in the same bandwidth - it cannot produce better PQ. But let me clarify before I get jumped on with "I've seen MPEG-4 and it looks better." I don't disbelieve you, but here is something to consider.

DTV gets this MPEG-2 signal from its source (HBO, for example) and then recompresses it using MPEG-2 again (their current method); MPEG-2 isn't quite as good a compressor as MPEG-4, and the results are visible onscreen. Now, using their new method, they recompress that same original MPEG-2 signal using MPEG-4 - assuming they use exactly the same bandwidth as before, the resulting MPEG-4 PQ will be better, because MPEG-4 does produce better PQ at a given bandwidth than MPEG-2.

But I believe that DTV won't pay us that respect. I think they will recompress with MPEG-4 AND lower the bandwidth, so that the resulting PQ will only be the same as it was before - but they will have twice as many channels on a bird as before.

Let me give you an analogy: when Microsoft came out with their Windows Media Audio (WMA) format, they accurately claimed that it could produce the same results in half the disk space as MP3. What they meant was: a 64 kbps WMA file had the same audio quality as a 128 kbps MP3 - therefore, the same music took half the disk space using WMA as it did using MP3.

NOW: take the same music source and encode it with 128 kbps WMA and 128 kbps MP3 encoders, and the WMA version will sound much better and be the same file size as the MP3 version. So, same amount of disk space but better quality. Doesn't that sound great? However, if you are trying to fit more songs onto a given device, what are you going to do? You'll likely say: hey, I can fit twice as many songs at the same quality as before (MP3) by moving to half-bitrate WMA.

And that, folks, is what I think DTV is going to do. Instead of thinking: Hey, using MPEG-4 at the same bandwidth as before I can provide a better quality picture! I'll reduce the bandwidth and get twice as many channels but only provide the previous quality picture (which many agree is substandard).

(have to head out the door so no proofreading again - sorry for any errors!)


----------



## Drestin (Jan 24, 2002)

On my WMA/MP3 topic, one quick thing more, another analogy if you will.

Some folks thought: hey, I know, I'll convert my MP3 collections to WMA and save space! They did, and immediately (and this is the tragic part, in my mind) accurately reported, "Oh man, this is terrible. The WMA recompressions sound much worse than MP3." Then they wrongly concluded "WMA sucks, I'll stick with MP3," and that, kiddies, is why we are still stuck with the inferior MP3 format as a de facto standard. WMA produces superior sound quality at the same bitrate as MP3. It also has better metadata handling, supports variable bit rates, and even has a lossless format.

My point, though (not to try to promote MS here), is that when you take and recompress an already-compressed signal, you are taking something acceptable and making it less acceptable. MPEG-2 recompressed with MPEG-2 again is a pretty nasty thing. Recompressing with MPEG-4 isn't as bad - by half, you could say - but it can't ever be as good as the original.

SO - back to my original post (smile): OTA, with its higher bandwidth and no _re_compression, is always going to kick DTV's recompressed butt.

Now that produces a question: Does cable recompress their HiDef feeds to us? Specifically, for my own example, does Comcast recompress the MPEG-2 signal it receives from HBO or Fox or whomever?


----------



## feldon23 (Mar 13, 2001)

I'm sure the locals are bad, but I was astounded to see bad compression on the Discovery HD Theater the last time I visited Tweeter. I mean what the hell is all this satellite stuff for if it's worse than cable?


----------



## kimsan (Jan 23, 2002)

feldon23 said:


> I mean what the hell is all this satellite stuff for if it's worse than cable?


In my case, it IS available and cable is not. I'd prefer better quality, but ANY quality wins out over non-availability. DirecTV wins by default.


----------



## Drestin (Jan 24, 2002)

kimsan said:


> In my case, it IS available and cable is not. I'd prefer better quality, but ANY quality wins out over non-availability. DirecTV wins by default.


I think that's exactly the case these days. There is nothing DTV offers that most cable companies don't, and in many cases cable is better (where better = more ___ or higher ____ or both). Add to that, your cable company is probably feeding you 3-8 Mbps low-latency internet, which supports an Internet phone solution - why would you pay for sat anymore?

Basically - I find the TiVo interface superior to all the others. That is the sole reason I have a sat dish on the side of my house today. The day Comcast has a TiVo HiDef DVR available in my area is the day my DTV gear goes up on eBay and the dish comes down. Customer retention my butt. What will they do, promise me that maybe one day I might get half as many local channels at lower quality? HiDef TNT? Where are Cinemax and TMC HiDef? I'll do my best Cartman imitation, to paraphrase: "Screw you guys, I'm going to cable!"


----------



## jfischer (Oct 14, 1999)

Drestin said:


> And that, folks, is what I think DTV is going to do. Instead of thinking: Hey, using MPEG-4 at the same bandwidth as before I can provide a better quality picture! I'll reduce the bandwidth and get twice as many channels but only provide the previous quality picture (which many agree is substandard).


Sadly, I'm 100% certain that's what they'll do. I don't expect any better picture quality when they go to MPEG-4, only twice as many channels as before.


----------



## newsposter (Aug 18, 2002)

feldon23 said:


> I'm sure the locals are bad, but I was astounded to see bad compression on the Discovery HD Theater the last time I visited Tweeter. I mean what the hell is all this satellite stuff for if it's worse than cable?


That's the shame of it. When I got DTV, I kept my local cable for one month and did A/B comparisons. DTV kicked cable's butt. I wish they did free 30-day trials of cable; I'd almost have done it a month ago to see how far they've come in 3 years.


----------



## TyroneShoes (Sep 6, 2004)

Drestin said:


> ...when you take and recompress an already compressed signal you are taking something acceptable and making it less acceptable. MPEG-2 recompressed with MPEG-2 again is a pretty nasty thing. Recompressing with MPEG-4 isn't as bad, by half you could say, but it can't ever be as good as the original.
> 
> SO - back to my original post (smile); OTA with it's higher bandwith and no _re_compression is always going to kick DTVs recompressed butt...


Your general conclusions are somewhat reflective of reality, but the reasoning used to get there might not support the conclusions.

MPEG-4 is really MPEG-2 with refinements. The refinements allow either increased compression with the same artifacts, or equivalent compression with fewer artifacts. But either way you look at it, less of the original information gets encoded with MPEG-4, in practice. So MPEG-4 can't be considered "better" than MPEG-2 without qualifying in what manner it might be better, and that will depend on the application. Any gain using MPEG-4 will imply tradeoffs that might not be needed with MPEG-2. But there could be formulae that make sense for DBS. DTV sure thinks so, anyway. I think their choice to go with it is a smart one. Bold, even - or maybe just inevitable.

But, if you don't change the algorithm or manipulate the data somehow in between, you can chain the same MPEG-2 or the same MPEG-4 algorithm over and over again all day long and see no additional impairments due to decompression/recompression. Most people don't realize that the analog signals they see on their SD TVs have been converted many, many times back and forth from D to A and back, and this conversion doesn't really add much degradation. Chaining the same algorithm many times, decompressing/recompressing many times - none of that really has to have a price if done properly.

But, if you chain different compression algorithms together - either two different profiles or levels of MPEG-2, or an MPEG-2 compression followed by an MPEG-4 compression (which is what DTV will be unable to avoid for HD LIL) - there is unavoidable aggregate artifacting. It might not be that noticeable, but it means at the very least that they can't surpass or even match OTA PQ. See post #1 for the evidence of what DTV has done just to the MPEG-2 signals.

Will it be noticeable? Again, that depends upon the application. If you have lots of bandwidth to work with, probably not. If you don't, it probably will be. DTV currently has the disadvantage of not enough bandwidth. The move to Ka might alleviate that, but not until they are fully converted to Ka. So their use of MPEG-4 is a stopgap, but might pay off for them down the road. In 5 years they may have enough bandwidth to give us those 1500 HD channels, and with better PQ than either now or shortly after Ka/MPEG-4 gets implemented (which for some of us is right now). It's an arms race for cable and DBS. The differences one to the other will continue to sway in one or the other's favor for some time. Right now, DTV has fallen behind, which to me seems like an idiotic time to jettison their one significant advantage...TiVo.



Drestin said:


> ...Now that produces a question: Does cable recompress their HiDef feeds to us? Specifically, for my own example, does Comcast recompress the MPEG-2 signal it receives from HBO or Fox or whomever?


Cable, which used to have a liability because they were analog while DBS was digital, has turned the tables. They now have the advantage of more bandwidth to work with.

Typically, what cable does is convert the 8VSB signal (or sometimes the SMPTE310 signal fibered to them directly from a TV station) to QAM. That means demodulating the 8VSB signal to a baseband data stream and remodulating it using QAM modulation. The advantage is that the digital file itself never gets converted to analog and back, and never gets decompressed or recompressed, meaning the original PQ survives the process completely intact.

Once they have the file in QAM format, the task is simply to distribute it with enough bandwidth to prevent bit starvation, followed by proper demodulation by the QAM demodulator and MPEG-2 decoder in the STB. Since they typically have bandwidth to spare, this is a piece of cake. HD over cable is probably indistinguishable from OTA, while HD over DTV has been proven inferior to either, right in this thread. This makes the choice of MPEG-4 by DBS less of a bold move and more of a desperate one.
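The demodulate-remodulate point can be sketched in a few lines. This is a toy with an idealized, noise-free 4-QAM constellation and a made-up payload (real 8VSB/QAM systems add forward error correction, interleaving, and channel noise): the mapping from bits to symbols and back is exact, so the compressed MPEG payload - and hence PQ - is untouched.

```python
# Toy 4-QAM modulator/demodulator: bits -> complex symbols -> bits.
CONSTELLATION = {
    (0, 0): complex(-1, -1), (0, 1): complex(-1, 1),
    (1, 0): complex(1, -1),  (1, 1): complex(1, 1),
}
DEMAP = {v: k for k, v in CONSTELLATION.items()}

def modulate(bits):
    """Map each pair of bits to one constellation symbol."""
    return [CONSTELLATION[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def demodulate(symbols):
    """Map each symbol back to its two bits."""
    return [b for s in symbols for b in DEMAP[s]]

payload = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in for MPEG-2 transport stream bits
assert demodulate(modulate(payload)) == payload   # bit-exact round trip
```

Because the round trip is bit-exact, changing only the modulation (8VSB to QAM) never touches the compressed video data, unlike DBS's decode-and-recompress step.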


----------



## newsposter (Aug 18, 2002)

kimsan said:


> For those of us with 720p DLP sets, the horizontal squeeze is pretty much irrelevant.


If you have a minute (all these numbers make my head spin, to be honest), could someone please tell me if I'm at an advantage or disadvantage (over you 720'ers) having a CRT RPTV at 1080, since 99% of my HD viewing is OTA? (I do catch a SHO/HBO movie every once in a while, and it's not too bad, but you can tell the difference from OTA.)

I keep TiVo set at 1080i and do notice that even SD is much better than at 480i.


----------



## Drestin (Jan 24, 2002)

TyroneShoes said:


> MPEG-4 is really MPEG-2 with refinements. [...] So MPEG-4 can't be considered "better" than MPEG-2 without qualifying in what manner it might be better,


"MPEG-4 is better" means that it is more effective at doing the same job as MPEG-2: compressing video streams. MPEG-4 demonstrates higher PQ than MPEG-2 at the same bit rate. That is "better." "More" would be: MPEG-4 uses less bandwidth to produce the same PQ as MPEG-2.



TyroneShoes said:


> But, if you don't change the algorithm or manipulate the data somehow in between, you can chain the same MPEG-2 or the same MPEG-4 algorithm over and over again all day long and see no additional impairments due to decompression/recompression.


Not accurate; let me demonstrate: take an original video source. Encode it/decode it and capture the output as #1. Take that output, encode it and decode it again (exactly as the first time) and capture the output as #2. In all cases, for lossy compression (like MPEG), #2 will be different than #1. By how much is a matter of bandwidth and some particulars of the CODEC used. Even at the highest (unlimited) bandwidth available, loss occurs. MPEG is lossy, always. It may be effectively insignificant above certain levels, but it's still there. At the end of the day you will see additional impairments.



TyroneShoes said:


> Most people don't realize that the analog signals they see on their SD TVs have been converted many, many times back and forth from D to A and back, and this conversion doesn't really add much degradation. Chaining the same algorithm many times, decompressing/recompressing many times - none of that really has to have a price if done properly.


Done at, basically, unlimited bit rates, this produces a result with so minimal a loss in PQ that you could consider it insignificant... but there is loss every time, and it does add up. OTA is partially better because it has far fewer conversions from original source to your TV; cable has more, but fewer than satellite.



TyroneShoes said:


> HD over cable is probably indistinguishable from OTA, while HD over DTV has been proven to be inferior to either


A new thought comes to me: if HD over cable achieves some of this by having more bandwidth available, and a TiVo is simply recording the MPEG stream, doesn't this mean the TiVo will need greater speed across the IDE bus? Given how poorly the current HiDef DVR performs capturing existing lower-bandwidth streams, how will it keep up at cable speeds?


----------



## bdlucas (Feb 15, 2004)

Drestin said:


> A new thought comes to me: if HD over cable achieves some of this by having more bandwidth available, and a TiVo is simply recording the MPEG stream, doesn't this mean the TiVo will need greater speed across the IDE bus? Given how poorly the current HiDef DVR performs capturing existing lower-bandwidth streams, how will it keep up at cable speeds?


The cable stream is the same as the OTA stream, and the HR10-250 is designed to keep up with OTA, so no problem there.


----------



## Drestin (Jan 24, 2002)

bdlucas said:


> The cable stream is the same as the OTA stream, and the HR10-250 is designed to keep up with OTA, so no problem there.


You beat me to it. I realized this as I walked upstairs to do some season pass reprogramming and noticed that the system was THE slowest if I was tuned to OTA HiDef (DTV HiDef next, then SD last, of course).


----------



## TyroneShoes (Sep 6, 2004)

Drestin said:


> "MPEG-4 is better" means that it is more effective at doing the same job as MPEG-2: compressing video streams. MPEG-4 demonstrates higher PQ than MPEG-2 at the same bit rate. That is "better."...


You can certainly call it better if it makes you feel better, but that does not in any way make it "better" if you qualify "better" as in better PQ for the viewer. "MPEG-4 is better" is a very oversimplified statement, which serves more to confuse people than to enlighten them.

Is unleaded gas better than leaded gas? Sure, if you qualify that as how it affects the environment. Will it give you better octane performance or clean your injectors "better"? Maybe not. In that respect, then, it is not "better". Again, you have to qualify an explanation that is otherwise too general or simplistic to be of any value. Since we are speaking in this forum about what we see at the end of the line, "better" for MPEG-4 would have to be qualified in that respect, because that is the discussion at hand, and what people want to apply "better or worse" to. And in that respect, it is not automatically "better".



Drestin said:


> ...let me demonstrate: Take an original video source. Encode it/decode it and capture the output as #1. Take that output, encode it and decode it again (exactly as the first time) and capture the output as #2. In all cases, for lossy compression (like MPEG), #2 will be different than #1. By how much is a matter of bandwidth and some particulars of the CODEC used. Even at the highest (unlimited) bandwidth available loss occurs. MPEG is lossy, always. It may be effectively insignificant above certain levels but it's still there. At the end of the day you will see additional impairments...there is loss, every time and it does add up...


I am sorry, but this is just absolutely untrue. It sounds reasonable, until you examine exactly why chained algorithms can either add additional impairments or can instead prevent them. Here's why:

During digitization, sampled values that can take any of an infinite number of values in the analog domain, such as luminance level, are quantized, or forced to fixed steps: each value is changed from its original value to one that is close, but still quantifiably inaccurate. This means that each value that falls between two fixed steps is represented by the coefficient of the step just above or just below it. None of the sampled values is represented exactly, unless one happens by coincidence to fall precisely on a step, which is unlikely.

If you take those values and compress them, some coefficients are kept, some are discarded. When you decompress them missing values are estimated from the information that remains. The luminance values for the majority of samples are not accurate, but accurate enough. That represents the inaccuracy of first-gen digitization and compression.

If you repeat the process for a second generation using the same digitizing algorithm, and without manipulating the coefficients in the meantime, the values are already quantized (or would be quantized) to the exact same values, so no additional rounding errors are incurred. And if you use the same compression and decompression algorithm, the information available at the first decode is exactly the same as that available at the second decode, and no new estimates for missing information have to be made that weren't made the first time around. IOW, the original digitization and compression does the original damage to the accuracy of the image, and repeating the process in the exact same manner doesn't necessarily do any further damage.

That means that for any number of identical chained algorithms, the inaccuracy does not compound due to aggregate concatenation. If you don't change the algorithms or manipulate the coefficients in other ways, you can do this theoretically an infinite number of times without incurring added impairments.

Bottom line, chained, identical A-to-D and D-to-A and MPEG compression algorithms do not increase impairments, so assuming that there will be further impairments can quite often be a flawed assumption.

If you change the digitizing algorithm (for instance, different quantization levels, different or rescaled pixel structure, etc.), all bets are off. Now, resampling and/or requantizing will add concatenation, compounding the inaccuracy, and increasing rounding errors which causes further impairments for successive generations. If you change the compression algorithm (for instance, different decisions about what information is preserved and what is discarded), same thing. For both processes, new, inaccurate math is involved for successive generations, where it wasn't in the earlier example, which uses the same inaccurate math as the first gen used. Using the same inaccurate math does not compound the problem, but using new inaccurate math on top of that does, resulting in even greater inaccuracy, and therefore more artifacts.

Bottom line, when chaining digitization or compression algorithms, you can prevent further impairments if it is done properly. But if you don't, you absolutely won't. That is the very simple physics of it. Math, actually.
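
The quantization argument above can be sketched in a few lines of code. This is a toy model (made-up sample values and step sizes, nothing from a real codec): re-running the same uniform quantizer is a no-op, while switching to a different step size introduces fresh rounding error.

```python
def quantize(samples, step):
    """Round each sample to the nearest multiple of `step` (toy A-to-D)."""
    return [round(s / step) * step for s in samples]

original = [7, 33, 58, 91]        # made-up "analog" luminance samples

gen1 = quantize(original, 10)     # first digitization: the original damage
gen2 = quantize(gen1, 10)         # identical algorithm again: no new damage
assert gen1 == [10, 30, 60, 90]
assert gen2 == gen1               # idempotent: errors do not compound

gen2b = quantize(gen1, 15)        # different step size: new inaccurate math
assert gen2b != gen1              # now the rounding errors concatenate
```

The same logic carries over to any deterministic lossy stage: as long as its output is a fixed point of the stage itself, chaining it adds nothing; change the stage, and each generation rounds anew.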

And the reason MPEG-2 from DTV is worse than MPEG-2 from OTA is not because of the quantity of generations being one more for DTV, it is due mostly to the severity of the final compression stage that DTV uses. OTA squeezes things down to where artifacts are just visible, or just invisible, as the case may be, and they are cutting a very fine line. Going from 1.5 Gbps to 12 Mbps is just about all the information reduction the signal can take before impairments become noticeable. DTV then takes that signal and squeezes it further while chaining dissimilar algorithms, to the point where impairments eventually become somewhat more visible for most critical viewers.


Drestin said:


> ...HD over cable achieves some of this by having more bandwidth available, and a Tivo is simply recording the MPEG stream. Doesn't this mean that the Tivo will need greater speed across the IDE bus - given how poorly the current HiDef DVR performs capturing existing lower bandwidth streams, how will it keep up at cable speeds?


The throughput of modern IDE drives is far greater than needed. This can be demonstrated by the HR10-250, which can easily record and/or play back three HD streams simultaneously using a single 5400 RPM drive (many PVRs in fact use only 4200 RPM drives). I don't think it does that "poorly". In fact, it seems to do it perfectly almost every time. Slowly, yes, but that is not due to the read/write speed of the drives; it is due to the overhead of the system software, which could be (yet woefully has not been) sped up dramatically with a software up-rev, as the 6.2 up-rev dramatically improved performance for the SD TiVos.


----------



## TyroneShoes (Sep 6, 2004)

newsposter said:


> If you have a minute, (all these numbers make my head spin to be honest) could someone please tell me if i'm at an advantage or disadvantage (over you 720'ers) having a CRT RPTV at 1080 since 99% of my HD viewing is OTA? (i do catch a sho/hbo movie every once in a while and it's not too bad but you can tell the difference from ota.)
> 
> i keep tivo set at 1080i and do notice that even SD is much better than at 480i


If you limit the discussion to the difference between displays with 720 native rez vs. sets with 1080 native rez (all else being equal), DTV provides a 1080i signal that has effectively the same H rez as a 720p display. That means that in the H direction, 1080i signals will have slightly less _capacity for_ resolution on a 1080 display than OTA 1080 signals will. But it will make a visible difference only if the original image takes advantage of the extreme limits of that 1080 rez.

So it may be minor, or even not noticeable, but it could in some instances be an advantage that OTA has over DTV for 1080p set owners, and to a lesser extent for 768p set owners. Or in many cases, it might not really matter.

That is the "other" discussion, however (resolution). This thread has primarily been about the difference in quantity of compression-based artifacts, DTV vs. OTA.


----------



## rminsk (Jun 4, 2002)

TyroneShoes said:


> I am sorry, but this is just absolutely untrue. It sounds reasonable, until you examine exactly why chained algorithms can either add additional impairments or can instead prevent them.


Your argument could be true for a static image that is just using DCT compression but MPEG does motion estimation also. The P-frames and B-frames motion estimation could be significantly different when recompressing the signal since the signal is different from the original source. Also, when recompressing the signal the I, P, B frames may not line up. These things could cause significantly more quantizing in the signal. That is the very simple physics of it...

I do agree that the major source of the problem is the lower bandwidth signal.


----------



## kimsan (Jan 23, 2002)

newsposter said:


> If you have a minute, (all these numbers make my head spin to be honest) could someone please tell me if i'm at an advantage or disadvantage (over you 720'ers) having a CRT RPTV at 1080 since 99% of my HD viewing is OTA? (i do catch a sho/hbo movie every once in a while and it's not too bad but you can tell the difference from ota.)
> 
> i keep tivo set at 1080i and do notice that even SD is much better than at 480i


The scaler in the HD TiVo is right on par with the one in my TV, in my experience. I leave mine set for 720 output and use DVI for almost all programming. For letterboxed SD I sometimes switch to 480i S-Video and use Zoom2 on the set to fill the screen. The price for that full screen is lower sharpness than the postage-stamp (letterbox + pillars) I'd get by sticking to 720 DVI.

I'd *imagine* the effect will be similar with a 1080i CRT, but I'm not certain as CRTs aren't bound to an absolute native resolution in the way fixed panel sets are. I'm just not certain that the 720p signals require scaling with a CRT. Someone here will no doubt be able to answer that with certainty.

OTA CBS and NBC are 1080i, Fox and ABC are 720p and the TiVo records them as is. Certainly you should pass the former as 1080i. Try 720 and 1080 for the latter and see if it makes any difference to your eyes.

That's what counts in the long run.


----------



## newsposter (Aug 18, 2002)

kimsan said:


> I'd *imagine* the effect will be similar with a 1080i CRT, but I'm not certain as CRTs aren't bound to an absolute native resolution in the way fixed panel sets are.
> 
> OTA CBS and NBC are 1080i, Fox and ABC are 720p and the TiVo records them as is. Certainly you should pass the former as 1080i. Try 720 and 1080 for the latter and see if it makes any difference to your eyes.
> 
> That's what counts in the long run.


That's one of the reasons I got the CRT. I didn't want a 'fixed' resolution per se. Even if my rationale wasn't correct, I still think I got a good bang for the buck. I do try 720 vs. 1080 during some football games and don't notice a difference, though I can't recall which networks I was watching at the time.


----------



## bdlucas (Feb 15, 2004)

TyroneShoes said:


> Slowly, yes, but that is not due to the read/write speed of the drives, it is due to the overhead of the system software, which could be (yet woefully has not been)sped up dramatically with a software up rev, as the 6.2 up rev dramatically improved performance for the SD Tivos.


I think you're right, depending on what you mean by "overhead of the system software". I think it's been reasonably demonstrated that the disk in some way plays a role in the slowness of the UI - the speed of the UI changes depending on whether the channels that are being recorded and/or played back are HD or SD. This makes sense when you consider that the UI stores data such as season passes on the hard drive and so has to compete with the video streams for access to that data. The speedup in 6.2 could be due in part to more efficient use of the disk for things like season passes that the UI needs to do its job.


----------



## Drestin (Jan 24, 2002)

TyroneShoes said:


> You can certainly call it better if it makes you feel better, but that does not in any way make it "better" if you qualify "better" as in better PQ for the viewer. "MPEG-4 is better" is a very oversimplified statement, which serves more to confuse people than to enlighten them.


Quite honestly, I didn't expect to over-analyze the various ways that the word "better" could be applied and what they could mean. I think that for the purposes a CODEC serves, "better" means doing those jobs either with a higher resulting PQ, or using less bandwidth to achieve a given PQ.

TyroneShoes, how else would you qualify the use of MPEG? It's there to compress a video stream. You have to pick your goal, then: do you want PQ, or are you concerned about bandwidth? For PQ, "better" would apply as in: does the picture have fewer artifacts, does it look clearer, does it have good color and saturation, etc., for a given bandwidth? For bandwidth: can it produce the same PQ as what you are comparing it to using less bandwidth? If it does, then that is another use of "better".

MPEG-4 can achieve both goals over MPEG-2. It can either maintain the same PQ as MPEG-2 using less bandwidth, OR, using the same bandwidth, produce better PQ. In my book that means that MPEG-4 is better. That's how I was using the word. But I think that should have been clear.

Perhaps, since you don't seem to approve of my use of "better" applied to MPEG-4, you'd care to tell us why you do not think MPEG-4 is "better" than MPEG-2?



TyroneShoes said:


> I am sorry, but this is just absolutely untrue. It sounds reasonable, until you examine exactly why chained algorithms can either add additional impairments or can instead prevent them. Here's why:


I cut out quite a bit of your explanations, which sound technical enough, so I don't lose our readers with a lengthy quote; it's summarized above. You say: if I chain identical compression schemes together, I won't lose quality beyond what happens in the first pass.

OK. So... off to my handy dandy video tools I go. I take original source material and encode it using my handy dandy industry-standard MPEG-2 tool. I pick some values; it shouldn't matter which, but I chose a fixed "High Quality" set of parameters my program uses, so I KNOW I'll always use the same values.
Encode.
Decode.

OK, I have a resultant file. Got it.
Now, take this file and using the EXACT SAME EVERYTHING I:
Encode.
Decode.

What I have now is a smaller file... hmmm... but it should have produced the same thing, with no further reduction in quality. I play the 2nd file back and it is slightly lower in PQ to my eye. But perhaps I'm imagining things, SO:
Encode.
Decode.

File got slightly smaller, but playback is noticeably worse.

Now, perhaps I am oversimplifying things and you'll correct me with a long technical explanation but maybe you can give it to me in easy terms even someone like me can understand.

You said:


TyroneShoes said:


> Bottom line, chained, identical [...] MPEG compression algorithms do not increase impairments,


But here I am, staring at chained, identical MPEG compression algorithm runs, and I clearly have increased impairments. How do I reconcile what I can clearly see with what you declare?



TyroneShoes said:


> And the reason MPEG-2 from DTV is worse than MPEG-2 from OTA is not because of the quantity of generations being one more for DTV, it is due mostly to the severity of the final compression stage that DTV uses.


We don't disagree here. MPEG-2 is MPEG-2, no matter who is using it. It's all about how much bandwidth you allow it to use (yes, yes, minus a few other parameters; I'm trying not to get overly obsessed with details). DTV is over-compressing to fit within the limits they have.

The point of my thread dealt with how DTV will use MPEG-4. Using MPEG-2 to deliver higher PQ at the same bandwidth DTV struggles with today *COULD* be an option - but one that I don't believe they will take. I think they'll use MPEG-4 to further reduce bandwidth consumption and take advantage of the fact that, when using MPEG-4 to recompress at a given bandwidth, it can produce the same PQ we have now. And that IS possible.



TyroneShoes said:


> The throughput of modern IDE drives is far greater than needed. This can be demonstrated by the HR10-250, which can easily record and/or play back three HD streams simultaneously using a single 5400 RPM drive [...]. Slowly, yes, but that is not due to the read/write speed of the drives, it is due to the overhead of the system software, which could be (yet woefully has not been)sped up dramatically with a software up rev, as the 6.2 up rev dramatically improved performance for the SD Tivos.


I was wrong when I made a comment on the IDE bus, my mistake. Throughput is definitely not the issue. The items that matter for a drive used in a PVR are: the ability to disable the hardware error retries, spindle speed, and actuator response time (seek times). As you noted, slower-spindle drives are used, but at a cost in performance.

Software inefficiencies are partly to blame, and that is obvious when simply upgrading the software offers significant improvements in performance, but I suggest that improved spindle speed and seek times would help too. Seeking to, and getting the heads above, the position of the data you need more quickly can never be anything but useful, no matter how much better the software is. For example, a 7200 or 10,000 RPM drive will help even if nothing else changes, because requests that need the heads away from the tracks they are recording/playing to/back will be serviced quicker, simply because less time is needed to wait for data to appear under the head.


----------



## bdlucas (Feb 15, 2004)

Drestin said:


> File got slightly smaller, but playback is noticably worse.


One thing that might be going on is that a typical MPEG encoder will apply various image processing algorithms such as softening (blurring), noise reduction, etc. before doing the actual compression step. For typical source material this will provide a better tradeoff of sharpness vs artifacts. But every time you repeat this the image will get a little blurrier (and hence the file will be a little smaller). Tyrone's claim relates to just the actual compression step, not to the image processing that's done before compression. It might be interesting to see what happens if you turn these pre-processing options off for the second and subsequent applications of the MPEG encoder.
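
That hypothesis is easy to model with toy numbers (a 3-tap average standing in for an encoder's noise-reduction/softening pass; none of this reflects TMPGEnc's actual internals): the quantizer by itself is stable across generations, but with a pre-filter in front of it, every pass degrades the signal a little more.

```python
def quantize(samples, step):
    """Round each sample to the nearest multiple of `step` (toy lossy stage)."""
    return [round(s / step) * step for s in samples]

def blur(samples):
    """3-tap moving average: a stand-in for encoder pre-filtering."""
    padded = [samples[0]] + samples + [samples[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3
            for i in range(len(samples))]

signal = quantize([12, 47, 83, 240, 201, 66], 8)   # generation 1

plain = signal
for _ in range(5):                 # re-quantizing alone: stable forever
    plain = quantize(plain, 8)
assert plain == signal

filtered = signal
for _ in range(5):                 # blur before each pass: keeps drifting
    filtered = quantize(blur(filtered), 8)
assert filtered != signal
```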


----------



## Drestin (Jan 24, 2002)

bdlucas said:


> It might be interesting to see what happens if you turn these pre-processing options off for the second and subsequent applications of the MPEG encoder.


For my admittedly quick and dirty test I used TMPGEnc and didn't enable any image processing at all. Perhaps more accurately I should say, none that I'm aware of. I don't claim to be an expert at video processing.


----------



## bdlucas (Feb 15, 2004)

Drestin said:


> For my admittedly quick and dirty test I used TMPGEnc and didn't enable any image processing at all. Perhaps more accurately I should say, none that I'm aware of. I don't claim to be an expert at video processing.


Hmm, you're right, TMPGEnc seems to have the image processing disabled by default. So it might be what rminsk said. I also seem to recall regarding JPEG compression (which MPEG resembles in part - that's the DCT part rminsk was talking about) that while in theory you could recompress without losing anything, doing so required a very carefully written encoder, which you were unlikely to find in practice. In other words, it seems plausible that Tyrone is right in theory under favorable conditions, but in practice decoding and re-encoding noticeably loses quality.


----------



## newsposter (Aug 18, 2002)

bdlucas said:


> I think you're right, depending on what you mean by "overhead of the system software". I think it's been reasonably demonstrated that the disk in some way plays a role in the slowness of the UI - the speed of the UI changes depending on whether the channels that are being recorded and/or played back are HD or SD.


Big time. When I try to even record what's in the live OTA buffer, it takes forever.


----------



## SpankyInChicago (May 13, 2005)

Drestin said:


> You did x10 because it's 32 bit color, I believe that is right. However, you also did x60. _Correct me if I'm wrong_ but I thought 1080i was 60 fields per second INTERLACED and therefore should be x30 frames per second transmitted.


Yeah, I did not actually write out the correct math, but the final number is still right.

You've got 1920 x 1080 x 10 x 3 x 30 x .66667 = 1.244Gbps.


----------



## Drestin (Jan 24, 2002)

SpankyInChicago said:


> Yeah, I did not actually write out the correct math, but the final number is still right.
> 
> You've got 1920 x 1080 x 10 x 3 x 30 x .66667 = 1.244Gbps.


What do the 10, 3 and .6667 values represent, please? (I assume the 30 is because we're talking about 1080i60, right?)


----------



## Dennis Wilkinson (Sep 24, 2001)

Drestin said:


> What do the 10, 3 and .6667 values represent, please? (I assume the 30 is because we're talking about 1080i60, right?)


I find it easier to think about with units applied:

60 fields/sec / 2 fields/frame * 1080 lines/frame * 1920 pixels/line * 2 samples/pixel * 10 bits/sample = 1.244Gb/sec

The "2 samples per pixel" is worth explaining, too. Uncompressed HD is Y'CbCr 4:2:2 sampled. The long story made short by that is that for every 4 pixels, you get 4 samples of Y' ('luma'), 2 samples of Cb ('blue difference chroma'), and 2 samples of Cr ('red difference chroma'). So, that's 8 total samples of mixed type per every 4 pixels, or 2 samples per pixel.

The 3 and the .66667 in the post you quoted account for the sampling, and the 10 is the bits/sample number.
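
The arithmetic in both posts checks out, and is easy to verify directly (plain arithmetic on the figures quoted above, nothing assumed beyond them):

```python
# Uncompressed 1080i60 bit rate, built up with units as in the post above
fields_per_sec = 60
lines_per_frame = 1080
pixels_per_line = 1920
samples_per_pixel = 2     # Y'CbCr 4:2:2: 8 samples per 4 pixels
bits_per_sample = 10

frames_per_sec = fields_per_sec / 2      # interlaced: 2 fields per frame
bits_per_sec = (frames_per_sec * lines_per_frame * pixels_per_line
                * samples_per_pixel * bits_per_sample)
print(bits_per_sec / 1e9)                # about 1.244 Gb/s

# SpankyInChicago's factoring: 3 components x 2/3 sampling = 2 samples/pixel
alt = 1920 * 1080 * 10 * 3 * 30 * (2 / 3)
assert abs(alt - bits_per_sec) < 1
```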


----------



## rminsk (Jun 4, 2002)

Television video signals are not 4:4:4 (RGB color). They are either 4:2:2 (full luminance information, chroma horizontally subsampled by 2) or 4:2:0 (full luminance information, chroma horizontally and vertically subsampled by 2). MPEG-2 compresses to 4:2:0 signals. I'm not sure what MPEG-4 uses. I also think that MPEG-2 is only 8 bits per component.


----------



## Dennis Wilkinson (Sep 24, 2001)

rminsk said:


> mpeg-2 compresses to 4:2:0 signals. I'm not sure what mpeg-4 is. I also think that mpeg-2 is only 8-bit per component.


MPEG-2 can be 4:2:2 as well -- it depends on the profile. "MPEG-4" supports both, as well as 4:4:4, but it depends on what codec you're using (Part 10/H.264/AVC supports all of them, I'm not sure about Part 2.)

You're right that MPEG-2 is 8 bits per component. 10 bits was proposed at some point (and would be the uncompressed source in many studios), but AFAIK the proposal for that extension was dropped. MPEG-4 Part 10 (H.264) includes 10- and 12-bit per component flavors.


----------



## rminsk (Jun 4, 2002)

You are right on the 4:2:2 in MPEG-2; I forgot about 4:2:2 Profile and High Profile. But isn't OTA using Main Profile @ High Level? That would be 4:2:0, right?


----------



## Dennis Wilkinson (Sep 24, 2001)

rminsk said:


> You are right on the 4:2:2 in MPEG-2; I forgot about 4:2:2 Profile and High Profile. But isn't OTA using Main Profile @ High Level? That would be 4:2:0, right?


For broadcast, yes.


----------



## TyroneShoes (Sep 6, 2004)

TyroneShoes said:


> ...for any number of identical chained algorithms, the inaccuracy does not compound due to aggregate concatenation. If you don't change the algorithms *or manipulate the coefficients in other ways*, you can do this *theoretically* an infinite number of times without incurring added impairments.
> 
> Bottom line, chained, identical A-to-D and D-to-A and MPEG compression algorithms do not *necessarily* increase impairments, so assuming that there will be further impairments can quite often be a flawed assumption...


There I go quoting (paraphrasing) myself again, which probably would not be necessary if I could just learn to edit in the first place, but...oh well.

You might have noticed that I qualified this entire exercise with the word "theoretically". Another way of saying that would be that in the real world, it often does not work out that way. And the theoretical example is true, and I stand by it, but I admit that it is difficult to do that in practice. And the reason why is that it is difficult for the analog image to _not_ be manipulated somehow in between generations. Not only that, but the D-to-A and encoding algorithms must use the exact reverse process of the A-to-D and decoding algorithms, which is also sometimes difficult or at least expensive to do, especially at the consumer level.

The first A-to-D (leaving the encoding and decoding out of the example for now) creates a series of mathematical values. The first D-to-A creates, from those values, an analog image that represents the original analog image: inaccurately, but hopefully accurately enough.

Now, let's say it's an image of you wearing your trendy grey tie, and the luminance level is a ruler-flat 80 IRE for that part of the image, as measured from the original analog image. If you can maintain that 80 IRE faithfully through the A-to-D, the next D-to-A, and the next A-to-D, then the coefficients representing each of those grey pixels in the second gen will be the same as the coefficients representing them in the first A-to-D. That is because they will originally be quantized to a particular level, and when the resultant analog image is quantized again, the grey tie pixels are already exactly at the quantization level used the first time. Same exact coefficients, same exact (but inaccurate) math used to get there.

But what happens if the analog IRE level is not maintained, and the grey tie is attenuated to 78 IRE instead during the next digitization? New quantization level inaccurately representing 78 IRE as the level of the grey tie, new coefficients, new inaccurate math compounding the original inaccurate math, all causing further impairment of the PQ. And that is very common...it is extremely difficult to preserve every aspect of an analog signal as it traverses the production or transport chain (which is one reason we do things digitally as much as possible).

You'd likely never notice it if it were simply 80 IRE attenuated to 78 IRE, but for an image that ramps smoothly between, say, 60 and 80 IRE, which is much more common, successive generations of quantization will eventually cause that smooth transition to become "stairsteppy", for lack of a better term, precisely because of the concatenation of compounded rounding errors. And analog video noise not originally filtered out will aggravate that, especially at the barrier between one quantum level and the next. Noise there, rather than simply varying between its natural analog levels of grey, will instead jump wildly between the two fixed quantization steps, which can be very noticeable. All of a sudden, we've got added artifacts attributable to secondary generations.
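
Both the flat grey tie and the ramp can be put in toy numbers (illustrative IRE values and a made-up 5 IRE quantization step, chosen only to make the effect visible):

```python
def quantize(samples, step):
    """Round each sample to the nearest multiple of `step` (toy quantizer)."""
    return [round(s / step) * step for s in samples]

# Flat grey tie at 80 IRE: re-digitizing the same level is harmless...
tie = quantize([80.0], 5)
assert quantize(tie, 5) == tie

# ...but a small analog attenuation between generations (80 -> 76.8 IRE)
# forces a new, different rounding decision:
assert quantize([0.96 * v for v in tie], 5) == [75]

# A smooth 60..80 IRE ramp becomes stairsteps on the first pass, and a
# slight analog loss between generations moves the steps around:
ramp = [60 + i * 0.5 for i in range(41)]
gen1 = quantize(ramp, 5)
assert sorted(set(gen1)) == [60, 65, 70, 75, 80]   # the "stairsteps"
gen2 = quantize([0.96 * v for v in gen1], 5)
assert gen2 != gen1                                # new compounded errors
```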

In a case where a consumer digitizer and encoder are used for second gens, anything can happen. The accuracy of the secondary analog image itself might be in question if the decoder is not using the exact reverse of the process the encoder uses, for instance, which is common in consumer-level products. Using such a digitizer and encoder is a poor example of a theoretical truth, but a very good example of how difficult it is to approach the theoretical at a consumer level. It's not even easy at the pro level, for that matter.

And going back to the theoretical example, if you use the same encoding algorithm for both successive compression stages, they will by definition have the same level and profile, the same GOP structure, the same exact I, B and P frames representing the exact same time slices of the original analog image, etc. Again, if the decoding is the exact reverse process as the encoding, there will be no added impairments. And again, that is very difficult in the real world, and even more difficult in a garden-variety PC from Best Buy running a shareware program in someone's rec room.

For the real-world application, what that leaves us with is that the closer the successive algorithms are to each other, and the more faithfully you can preserve the analog image during the D-to-A and A-to-D process (and if you don't manipulate the digital coefficients, by for instance, digitally changing the luminance, chroma, gamma, etc., which can also only be done by concatenated math) the less degradation you will endure, all else being equal. MPEG-2 decompressed and recompressed as MPEG-4 on a signal that travels 46,000 miles through a hostile environment (and another 46,000 miles to your dish) breaks all of those rules in pretty significantly rough ways.


----------



## rminsk (Jun 4, 2002)

TyroneShoes said:


> the same exact I, B and P frames representing the exact same time slices of the original analog image, etc.


Motion estimation will not be the same between the original and the recompressed versions of the signal. The bits are gone and the correlation will be different. The B and P frames will be different.


----------



## TyroneShoes (Sep 6, 2004)

If you are using the same process and the coefficients are the same for each encoding process, including DCT, there is no way the resultant mathematical manipulation that creates these extrapolated coefficients (making up the B and P frames) could have different results, one time from the other. The bits are gone due to the first encoding. No more or less or different bits are discarded the second time around. If they were, that would be employing a different process, by definition.

The decoder estimates the missing information when creating the analog image. When that analog image is digitized a second time using the exact same process as was used the first time, the exact same redundant information is what is thrown away, both times. And the exact same information that is kept is identical, both times. And the exact same estimated information is recreated in the decoder, both times. If the process is the same and the motion vectors are the same and the original data is the same and the original math is the same, then the answer will also be the same.

Every pixel is digitized to a mathematical number. Some are kept and transmitted directly. Others have metadata transmitted that lets them be recreated at decode just as they were originally digitized. That means that the decoder has all of the values, some transmitted and some recreated; all of the coefficients are available just the same, just as they were before compression. If you apply the same decode process to them, the resulting image has to be identical.


----------



## bdlucas (Feb 15, 2004)

I think rminsk is correct. "Motion estimation" in the encoder involves searching the previous frame for regions that are as similar as it can find to each macroblock in the current frame and using that as a starting point. The DCT is computed on the difference between the similar region from the previous frame and the actual region in the current frame. Since the image sequences are different after an encoding/decoding cycle the motion estimation will find different matching regions and the DCT will be computed on a different difference image.
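
A minimal one-dimensional sketch of that search (toy pixel rows and a plain SAD metric, far simpler than a real MPEG macroblock search) shows why the chosen vector depends on the exact reference pixels:

```python
def best_offset(reference, block):
    """Return the offset into `reference` whose slice best matches
    `block` by sum of absolute differences (SAD)."""
    n = len(block)
    scores = [(sum(abs(r - b) for r, b in zip(reference[off:off + n], block)),
               off)
              for off in range(len(reference) - n + 1)]
    return min(scores)[1]

block = [200, 200]                           # current-frame region to predict

pristine = [10, 10, 200, 200, 10, 10, 10]    # original reference frame
assert best_offset(pristine, block) == 2

# After a lossy decode the reference pixels differ (exaggerated here),
# so the winning match, and hence the motion vector, can differ too:
decoded = [10, 180, 190, 150, 10, 10, 10]
assert best_offset(decoded, block) == 1
```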


----------



## TyroneShoes (Sep 6, 2004)

bdlucas said:


> I think rminsk is correct. "Motion estimation" in the encoder involves searching the previous frame for regions that are as similar as it can find to each macroblock in the current frame and using that as a starting point. The DCT is computed on the difference between the similar region from the previous frame and the actual region in the current frame. Since the image sequences are different after an encoding/decoding cycle the motion estimation will find different matching regions and the DCT will be computed on a different difference image.


I respectfully disagree. If the I-frame is identical, and the motion estimation that creates a successive P-frame is identical, then the P-frame will also be identical.

Remember, there are four separate _groups_ of processes here. First digitization, then compression, followed by decompression, and analog recreation. Digitization reduces infinite analog values to fixed mathematical values. That is where the original damage is done, and replicating the analog image at this point will be the same as replicating the analog image after chained generations. Secondary generations will not be faithful to the original analog image, but they will be just as faithful to the analog image that could be created just after the original digitization process. That is the first lossy part of the process.

Compression (including motion estimation under DCT) discards some of the coefficients, but sends some other, fewer and smaller coefficients (metadata) that allows the decoder to use a lookup table to recreate those unsent coefficients exactly as if they had been sent. Then the decoder uses both the sent data and the sent metadata to replicate the analog image by adding the recreated data to the sent data. That is what the analog image is created from, and it is nearly identical to the data after original digitization and before compression. The decoder might be guessing, but it is guessing nearly right. The P and B frames are the second lossy part of the process. The I-frames are sent intact, so that part of the process is not lossy, by definition.

And that is the important point here. The I-frames _are_ recreated exactly, while the _P and B frames_ are not. And the I-frames form the entire basis of secondary encoding, not the P and B frames, which are discarded during secondary encoding once again, and are replaced by identical P and B frame metadata created in the exact same manner from perfectly identical I-frames, exactly as they were the first time.

The process of analog recreation creates real analog frames. The frames representing the P and B frames are not as perfect as the original analog frames, that is certain. But when the motion estimation process is applied again to the new analog frames that perfectly represent the I-frames, the exact same motion estimation algorithm will create exact replicas of the original P and B frames, just like it did the first time.

How is this possible? Because the P and B frames are created originally from the I-frames, which are taken from digitizing the analog frames. And the analog frames representing the I-frames _are_ exactly the same in the secondary analog copy. The analog frames created from the original P and B frames do not faithfully represent the original analog frames before compression, but the analog frames created from the original I-frames do represent the original I-frames before compression. The accurate analog frames that matter, those representing the original I-frames, are kept. The inaccurate analog frames represented by the original P and B frames were discarded the first time around, and those are exactly the frames discarded the second time around, if the process is the same. And in the example, it is.

In short, the analog recreation of the P and B frames just before the second generation is not exactly representative of the original analog frames, but it does not matter, because they are to be discarded anyway. The analog recreation of the I-frames is indeed representative of the original I-frames, because that information was never discarded. The secondary process uses the same information in the same way to create the same encoded mathematical values as did the original encoding process.


----------



## Dennis Wilkinson (Sep 24, 2001)

TyroneShoes said:


> If the I-frame is identical, and the motion estimation that creates a successive P-frame is identical, then the P-frame will also be identical.


That first if is a very, very big one. There is no guarantee whatsoever that re-compressing a decompressed I frame, even with the exact same algorithm, will result in the _exact_ same compressed I frame data that you started with. It's very important not to confuse "computer math" with actual math with real numbers. We're stuck with mathematical operations in fixed or floating point numbers, intermediate storage of limited precision, and so on. This means rounding error, underflow, and so on. Even ignoring quantization, just sending data through DCT/IDCT over multiple generations will change the data on each successive generation, because we don't get infinite precision.

Some image data may eventually stabilize after several encode/decode generations, but not all will.
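
That precision point is easy to demonstrate numerically. The sketch below is a minimal stand-in for an intra coder, not real MPEG-2 (it uses an orthonormal 2-D DCT with a single flat quantizer step, where MPEG-2 uses weighted quantization matrices): each generation transforms a frame, quantizes the coefficients, inverts the transform, and rounds back to 8-bit pixels. The first generation is always lossy; whether later generations reproduce their input exactly or keep drifting depends on the content, the quantizer, and the rounding, which is exactly the point above.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (the per-block transform family
    used by MPEG-2 intra coding)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def encode_decode(block, q):
    """One compression generation: forward 2-D DCT, uniform
    quantization (the lossy step), inverse DCT, then rounding back
    to 8-bit integer pixels."""
    c = dct_matrix(block.shape[0])
    coeffs = c @ block @ c.T
    coeffs = np.round(coeffs / q) * q
    recon = c.T @ coeffs @ c
    return np.clip(np.round(recon), 0, 255).astype(np.uint8)

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, (8, 8), dtype=np.uint8)
gens = [frame]
for _ in range(5):
    gens.append(encode_decode(gens[-1].astype(float), q=16))
# Per-generation maximum pixel change; diffs[0] is the first-encode loss.
diffs = [int(np.abs(a.astype(int) - b.astype(int)).max())
         for a, b in zip(gens, gens[1:])]
```

With some inputs the later entries of `diffs` go to zero quickly (the data stabilizes); with others they do not, which is why "identical algorithm, identical output" cannot be assumed in general.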


----------



## bdlucas (Feb 15, 2004)

TyroneShoes said:


> If the I-frame is identical, and the motion estimation that creates a successive P-frame is identical, then the P-frame will also be identical.


But the motion vectors depend on both the input I frame and the input P frame and so even if the I frame remains the same, if the P frame changes then the motion vectors will change and so the output P frame will change, as far as I can see.


----------



## SpankyInChicago (May 13, 2005)

Dennis Wilkinson said:


> I find it easier to think about with units applied:
> 
> 60 fields/sec / 2 fields/frame * 1080 lines/frame * 1920 pixels/line * 2 samples/pixel * 10 bits/sample = 1.244Gb/sec
> 
> ...


Well, you explained it much more clearly than I would have.

Anyway, I guess my point is that uncompressed HD requires massive bandwidth, and the original 45 Mbps mentioned by feldon earlier in this thread already represented a significant compression ratio.
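
For reference, here is the arithmetic behind the quoted figure, plus the compression ratio implied by the 45 Mb/s feed mentioned earlier in the thread:

```python
# Uncompressed 1080i bandwidth, following the units in the quoted calculation:
fields_per_sec = 60
fields_per_frame = 2
lines = 1080
pixels_per_line = 1920
samples_per_pixel = 2   # 4:2:2: one luma plus one (alternating) chroma sample
bits_per_sample = 10

bps = (fields_per_sec / fields_per_frame) * lines * pixels_per_line \
      * samples_per_pixel * bits_per_sample
print(bps / 1e9)   # ~1.244 Gb/s

# Compression ratio down to a 45 Mb/s feed:
print(bps / 45e6)  # ~27.6:1
```

So even the "uncompressed-grade" 45 Mb/s contribution feed is already carrying roughly one bit for every 28 bits of raw video.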


----------



## rminsk (Jun 4, 2002)

TyroneShoes said:


> I respectfully disagree. If the I-frame is identical, and the motion estimation that creates a successive P-frame is identical, then the P-frame will also be identical.


But the P-frame will not be identical, because the P-frame has already been quantized and compressed. The motion vectors will be different.



> Digitization reduces infinite analog values to fixed mathematical values. That is where the original damage is done, and replicating the analog image at this point will be the same as replicating the analog image after chained generations.


OK, we have quantized the image. We discretized the image when we captured it with our camera, both spatially and by quantizing to discrete values. Typically this is done at 4:2:2 for D1 tape and Digibeta. I am making this overly simple because the camera could be analog with the luma and chroma bandlimited to different frequencies, HDCAM, D5, ... but we have discrete values over a spatial sampling grid.



> Compression (including motion estimation under DCT) discards some of the coefficients, but sends some other, fewer and smaller coefficients (metadata) that allows the decoder to use a lookup table to recreate those unsent coefficients exactly as if they had been sent. Then the decoder uses both the sent data and the sent metadata to replicate the analog image by adding the recreated data to the sent data. That is what the analog image is created from, and it is nearly identical to the data after original digitization and before compression. The decoder might be guessing, but it is guessing nearly right.


No, the data is gone. The unsent coefficients are nowhere to be found. That is what lossy compression is. The DCT does a pretty good job of determining which coefficients are important. The decoder uses a reconstruction filter to interpolate the data, but this is only an approximation to the real signal. Also, in typical broadcast profiles the signal is now 4:2:0 and no longer 4:2:2.
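
The 4:2:2-to-4:2:0 step is itself a small, irreversible discard. Here is a minimal sketch (simple vertical averaging of chroma-pair rows; real converters use proper filters and account for interlaced chroma siting):

```python
import numpy as np

def to_420(chroma_422):
    """Convert a 4:2:2 chroma plane (full vertical resolution, half
    horizontal) to 4:2:0 by averaging vertical pairs of rows. The
    extra vertical chroma detail is discarded and cannot be recovered."""
    h, w = chroma_422.shape
    assert h % 2 == 0, "needs an even number of rows"
    pairs = chroma_422.reshape(h // 2, 2, w).astype(np.float32)
    return np.round(pairs.mean(axis=1)).astype(np.uint8)
```

Any downstream decoder now interpolates the missing chroma rows; it gets a plausible approximation back, not the original samples.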



> The P and B frames are the second lossy part of the process. The I-frames are sent intact, so that part of the process is not lossy, by definition.


I-frames are lossy also. I-frames are just frames that do not need any of the surrounding frames to reconstruct an image; they are still compressed with a lossy compression scheme.

The rest of your argument is based on the assumption that I-frames are losslessly compressed. I could continue to argue, but it would just be the same thing over and over. Even if they were losslessly compressed, the P-frames and B-frames would still be lossy, and the image cannot be recreated exactly as the original, thus changing the motion estimation.


----------



## TyroneShoes (Sep 6, 2004)

Maybe I can reduce this to a couple of simple questions, as getting into the nuts and bolts by example seems to be confusing people. And the first question is: "regarding the cascading of two compression algorithms, if the algorithm is the same in each case, what might be different about the secondary compression of the reconstituted analog image, from the original compression of the original image?"

Obviously nothing about the process is different, so if something is different that might cause a different outcome the second time, that leaves only the difference between the original analog image and the reconstituted intermediate analog image.

Hopefully we can all agree on that much. I think we can all agree also that the original analog image is indeed different from the reconstituted image. But does that still make the final outcome of the second process any different from the result that came from the first process in the cascade? If the process is identical, well then no, it can't possibly be different, and it isn't.

Why? Because although we are starting with the true original image, that image is, as a first step, redacted to perfectly represent the eventually reconstituted intermediate analog image as a part of the original process. The original process uses some information from the original image, and discards the rest. It then uses a set of rules to approximate the information left. The original analog information is reduced to a partly accurate, partly inaccurate representation of the original image as a part of the original compression process. This same set of coefficients is what is used to approximate the intermediate analog image.

When the process is repeated on that image, the information that is true about the reconstituted image is by definition the same as the information that is true about the original image that was used by the process the first time. When the secondary process is employed, it treats that same identical true information in the same way as the first process did, because it is the same process applied to the same true information.

The information that is not true in the intermediate analog image is discarded, just like the remaining information (that was true) that was discarded by the same process the first time regarding the original analog image. So the fact that this part of the information is true in one case and not true in the other simply does not matter, as that information is completely discarded in both cases. The process then uses the same exact rule table in the decoder to approximate the rest of the information from the true information only, and since the process is the same and the information used is the same, the information it approximates is the same which makes the resultant combined information the same as well.

That can only mean that when decoded, it will recreate an analog image identical to the image created from the first decode. It is the identical process both times, working on the identical information available each time in the same identical way, so it can only produce an identical result. The fact that the intermediate analog image is not exactly representative of the original is not relevant, and completely beside the question, because the parts of that image that are used _are _ identical to the original. The intermediate analog image will then still be identical to the analog image at the end of the two cascaded processes, which means the secondary process has no net effect regarding the outcome of the first process.

But you don't have to believe me. Hell, I'm just another self-appointed "internet expert". I have no way to prove to anyone that I actually know what I'm talking about, so you have the right to be skeptical.

So I will defer to the real experts, who seem to be able to sum all of this up in fewer words:


Jerry C. Whitaker said:


> ...Compression and decompression within the same format is not normally considered concatenation. Rather, concatenation involves changing the values of the data, forcing the compression technology to recompress the signal...for common video applications, concatenation results in artifacts and coding problems when different compression schemes are cascaded and/or when recompression is required...Using the same compression scheme repeatedly (MPEG-2, for example) within a chain--multiple generations, if you will--should not present problems, as long as the pictures are not manipulated (which would force the signal to be recompressed). If, on the other hand, different compression algorithms are cascaded, all bets are off...


And this is of course exactly what I have been saying all along. The only confusing thing in Mr. Whitaker's treatise is that he very _correctly_ does not define applying a compression scheme redundantly as "compression" unless there is a further reduction of information from the original compression process, while most folks on this forum somewhat _incorrectly_ define any application of a compression scheme as implied further compression. In many cases it does imply further concatenation or alteration of the data, but in this case it does not. He also correctly defines "recompression" as an application of a secondary compression scheme that must by definition change the resultant data. Concatenation is not equal to compression, and applying compression does not necessarily imply concatenation, or any change in the data.

That means that applying a "compression scheme" is only applying "concatenation" if the data changes, and according to this expert, when you apply the identical scheme secondarily, the data does not change, concatenation does not occur by definition, and there is no net change to the resultant image. I think that's pretty much what I said earlier, although somewhat less elegantly than Mr. Whitaker. So I agree with the expert, and apparently the expert agrees with me.

And math is math, whether it is in a computer or not. All math in compression is inexact math, but it is equally inexact in all instances within two identical processes, or they would not be identical, by definition, and that inexact math gives equally inexact resultant outcomes when applied identically to identical information.


----------



## Drestin (Jan 24, 2002)

TyroneShoes said:


> So I agree with the expert, and apparently the expert agrees with me.


Ok, fine fine fine, yer smart and we're not.

So, here is another question: when would all of this apply? In other words, in what scenario, as applied to "Joe TV Watcher is looking at a DTV picture on his TV," would you have multiple compress, decompress, recompress, decompress chains going on from source to destination **where nothing changes between decompression and recompression**?

We get it when you state that if nothing changes then you can chain all these runs together without degradation. Fine. But when would that actually be done in the real world? Why would you decompress and recompress if you weren't doing anything in the middle?

I assume that the reason video gets decompressed somewhere and then recompressed to be sent further down the line is so that *something* can occur at this in-between point. And if that is so, then you've lost your perfect chaining-of-unchanged-data scenario, and we're back to what we've all been trying to talk about in the first place.

So, when I'm watching my DTV, how often did I benefit from your perfect scenario? And how much more frequently should I be concerned about decompress-change-recompress and the associated artifacts?


----------



## Dennis Wilkinson (Sep 24, 2001)

Drestin said:


> Ok, fine fine fine, yer smart and we're not.


I've read the same literature myself -- the scenario and document that he's referencing here still do not guarantee that the output of multiple generations of compression and decompression is _identical_, just that any errors introduced in subsequent generations are small enough that they won't matter in actual usage. It is accurate to state that the errors here are not a result of the lossy compression, though -- the errors introduced are errors in the conversions between the analog and digital domains, and errors in the conversion from the spatial domain to the frequency domain, neither of which is necessarily implemented as a pair of perfect inverse functions in the real world. But it is fair to say that these aren't "MPEG artifacts".



> We get it when you state that if nothing changes then you can chain all these runs together without degregation. Fine. But, when would that actually be done in the real world? Why would you decompress and recompress if you weren't doing anything in the middle?


Remember that what he's talking about in the "nothing (significant) changes" case is not just MPEG encode/MPEG decode chained together, but successive generations where all the encoding parameters are kept absolutely identical. While this can and does happen in broadcast (sometimes you need to decompress to switch, for example, but the only significant loss of information would occur during the switch) it's not the usual case. In a mature broadcast environment (one comparable to SD broadcast today), you might see something more like this, looking at the whole transmission chain:

Network Source->Affiliate->decompress->add local overlays*->recompress/multiplex**->"broadcast"->cableco->(rate shaping)**->set top->(decompress)->your display.

"Add local overlays" is a typical affiliate operation -- includes things like putting news alerts at the bottom of your screen, or a semitransparent channel ID bug, or school closings... you get the idea. "Broadcast" is in quotes because that's not really how most cablecos and satellite companies actually get fed programming.

I've marked the stages where manipulation of the uncompressed data occurs with *, and potentially lossy compression stages with **.

"Rate shaping" in this case refers to a family of techniques used to reduce bit rate to fit within a given bit budget. Sometimes this is "decompress then recompress to a lower bit rate", sometimes it's "undo entropy encoding, requantize, redo entropy encoding" (much lighter weight computationally since it works without requiring new motion estimation, but ugly for more than just subtle rate changes.)

Local affiliates may recompress (reducing bit rate) to do multicast (fit more than one channel in their allocated spectrum.) Cable and satellite companies may also do it to fit a larger number of channels within the same bandwidth.
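
The lighter-weight requantization variant described above can be sketched in a few lines (with hypothetical flat quantizer steps `q_old` and `q_new`; a real rate shaper works per coefficient against the standard's quantization matrices and then redoes the entropy coding):

```python
import numpy as np

def reshape_rate(levels, q_old, q_new):
    """Requantization-style rate shaping (a sketch): recover the
    dequantized DCT coefficient values from the transmitted levels,
    then requantize with a coarser step. No new motion estimation is
    needed, which is why this is far cheaper than a full
    decompress/recompress cycle -- but coarser steps zero out small
    coefficients, which is where the "ugly for more than subtle rate
    changes" comes from."""
    dequant = levels * q_old               # quantizer levels -> coefficient values
    return np.round(dequant / q_new).astype(levels.dtype)
```

With a coarser step, the small coefficients collapse to zero, so the subsequent entropy coding emits fewer bits at the cost of detail.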


----------



## TyroneShoes (Sep 6, 2004)

Drestin said:


> Ok, fine fine fine, yer smart and we're not.
> 
> So, here is another question; when would all of this apply? In other words; in what scenario, as applied to "Joe TV Watcher is looking at a DTV picture on his TV," would you have multiple compressed, decompress, recompress, decompress chains going on from source to destination **where nothing changes between decompression and recompression**
> 
> ...


Well, whether I am smart or not is not really relevant, and nothing in this thread is proof that I am smart, just that I have been either lucky enough or determined enough to find the real answers, which can then maybe help dispel some of the superstition surrounding these issues. I'm not trying to outsmart anyone, just trying to help enlighten people to the facts. Whether you infer from any of that that I'm smart, or just a smart-ass, or something in between can't be something I let myself worry about. That was never the point.

I'm just another swingin' dick like everyone else here. One advantage I occasionally enjoy in this area is that it is the area I make my living in. I deal with a cornucopia of the various parameters of coding and decoding MPEG-2 (hopefully intelligently and effectively) every day and get paid to do so, and I have been dealing directly with these exact matters professionally, daily, for the last 7 years of a checkered 30-year career in broadcast engineering. That doesn't make me smart, or an expert, but it might mean I can bring a point of view to this table that might be valuable to some folks who come here looking for answers.

The only reason I had to bring out the big guns this time is because there is this vexing thing called "perceived value" that colors people's ability to learn and accept. Folks seemed to find it difficult to accept what I, a typical anonymous forum poster, was saying, so I enlisted an unimpeachable expert to help overcome their skepticism. If you are still skeptical, then neither of us can help you past that point.

But it is also frustrating when something I have spent a lot of time and energy learning about and thinking through is dismissed out of hand by folks, many of them weekend warriors with a PC and a DVD burner, who obviously have a fuzzy grasp on certain things. And that is not to say that they are _not_ smart; you and many others here obviously are very bright. These are very complicated abstract concepts, and it is easy even for very intelligent people's brains to overheat a bit trying to grasp these things, especially if they have not had a continued opportunity to apply themselves to understanding them. That still doesn't make anyone better than anyone else, and again, that was never the point.

Back more directly to your questions: understanding what's going on in cascaded identical algorithms isn't very directly applicable to the real world, and I covered that earlier. But that also was never the point. The purpose of bringing up this subject is simply to point out that compression is not inherently bad, and that the more perfectly the data is maintained (which is nearly impossible) and the more closely the chained algorithms match each other, the less degradation will occur. It's always a matter of degree, and this is central to understanding exactly why there are different degrees of degradation, which is a better mind-space to find yourself in than merely accepting a one-size-fits-all, black-and-white assumption that any compression is always a bad thing. As we have discovered, it isn't.

That is an important point for someone who makes his living doing that, and those sorts of discussions are also important points for folks who want to know the fine points of whether they should get HD from DBS or from cable, or if they should buy a 768-rez set vs. a 1080p one. I think that's why many of us come here in the first place. Understanding the theoretical example allows folks to apply it in practical ways to how it might affect them, or their work, in the real world. Having that understanding of that concept has been invaluable to me, and maybe it can be helpful to others. Take what you want from it, leave the rest. Understanding and belief are not strict requirements, but either can be helpful.

It's a jungle out there, and the more armed you can be with exactly how and why things work the way they do, the better choices you can make regarding your HD system. I don't have all of the answers, but I have a few. And I also just have a very hard time resisting shooting down, as politely as I can (hopefully), answers that appear in these forums that might be far enough away from reality to lead folks down the wrong path. If that's unacceptable, then either slap the cuffs on me, or post somewhere else. Once a thread devolves into a dick-measuring contest, I get discouraged enough to leave, anyway. 

Peace, Brothers. See ya in the next thread.


----------



## kimsan (Jan 23, 2002)

TyroneShoes said:


> Back more directly to your questions, understanding what's going on in cascaded identical algorithms isn't very directly applicable to the real world, and I covered that earlier. But that also was never the point.


I deeply appreciate the insight and information you provide. Please take this with the intent I mean it.

Any issues and examples of increasing degradation came about simply as questions to your ideal/minimal degradation examples.

Had the disclaimer "isn't very directly applicable to the real world" been the preface, I doubt anyone would have questioned or disputed the theoreticals. If it was bolded/italicized/colored/highlighted as the opening, I apologize for missing it. I was busy absorbing the rest of the post.

All I meant to suggest is that you were speaking in theoretical terms of how good it could be in the ideal world, and the challenges came from how bad it really is in the real world.

All about misunderstanding.

Now god help me if MY meaning wasn't as clear as I meant it  Peace indeed.


----------



## Drestin (Jan 24, 2002)

TyroneShoes said:


> Well, whether I am smart or not is not really relevant, and nothing in this thread is proof that I am smart... Whether you infer from any of that that I'm either smart or just a smart-ass or something in between can't be something I can even let myself worry about. That was never the point.


Actually, you are smart, and I think it is proven by all you wrote; it's obviously not just copied and pasted from somewhere else. As we're all saying to each other now, I'm not trying to be a jerk towards you; I have respect for what you are saying and am trying to learn. Just some things didn't resonate with me, in my own PC/DVD-burner kinda way 

Thank you for all the info - I'm still trying to assimilate it, but it was a wild ride


----------



## newsposter (Aug 18, 2002)

So what's better: OTA HD or DirecTV HD?


----------



## Drestin (Jan 24, 2002)

newsposter said:


> So what's better: OTA HD or DirecTV HD?


(IMHO) As it stands now, in the real world (i.e., "what will I see with mine own two eyes") it's OTA hands down. A simple A/B comparison makes it perfectly clear. OTA has more bandwidth AND less apparent MPEG artifacts visible.


----------



## newsposter (Aug 18, 2002)

Drestin said:


> (IMHO) As it stands now, in the real world (i.e., "what will I see with mine own two eyes") it's OTA hands down. A simple A/B comparison makes it perfectly clear. OTA has more bandwidth AND less apparent MPEG artifacts visible.


Thanks, because I didn't understand a word of the past few pages


----------



## bdlucas (Feb 15, 2004)

newsposter said:


> Thanks, because I didn't understand a word of the past few pages


Don't worry, the last few pages had little to do with that question. It was just waving and measuring.


----------



## feldon23 (Mar 13, 2001)

See my sig.


----------



## TyroneShoes (Sep 6, 2004)

newsposter said:


> So what's better: OTA HD or DirecTV HD?


Arrrrgh! 

(I promise to be brief).

If you go to the top of the thread, there are these four attachments, each of which is worth at least a thousand words, as is often said. From those pictures, it is pretty obvious that for at least a part of the time, there is more degradation of the HD images on DTV than there is OTA. That much is fairly indisputable.

But it is also important to make this distinction in context. The difference between OTA images and DTV images, as far as compression artifacts are concerned, is limited to extremely brief, nearly imperceptible frames representing rare points in time congruent with high motion. IOW, many of us will not really see a difference most of the time, and some of us will not notice that difference at all. The reduction of 1920x1080 resolution to 1280x1080 resolution, on the other hand, is constant, not brief and fleeting. But whether it actually makes a difference depends on your display, the output resolution you choose for your HR10, and most importantly the achieved resolution of the original content and telecine process, which more often than not will negate any potential disadvantage. IOW, that also may not really matter.

Bottom line, it's close. Very close. It's like a Coke drinker having to drink Pepsi. You wouldn't abhor a hamburger joint with great burgers just because you can only get Pepsi, and it makes sense to decide on which content provider to use based on almost everything else but the differences between how well they present their content (because there is very little difference). It makes more sense to choose the one that can deliver it with the best access (best PVR, best wiring scheme for your home, etc.), best channel selection, best price points, best tech support, etc., almost by any criteria _but_ the minor differences between how one might look compared to the other.

If you can get OTA without killing yourself financially and literally in the process, go for it. If you can only get cable, be happy with that. If you end up with MPEG-4 from DBS, I don't think it's worth fretting over any potential difference.

(so much for promises)


----------



## TyroneShoes (Sep 6, 2004)

kimsan said:


> ...Had the dislaimer "isn't very directly applicable to the real world" been the preface, I doubt anyone would question or dispute the theoreticals...All about misunderstanding...


That's cool. Not to keep beating a dead horse (and not to put words in your mouth), but it seems you are saying that it would have been easier to accept what was always true all along if it had been presented differently. If so, I think you just made my earlier point regarding "perceived value". The truth was always true, of course. Whether the truth is believed, or at what point it becomes believable, is always an interesting question that seems to reveal a lot about the human condition.

But it's also not just a theoretical concept; it's a real phenomenon regarding the physics of MPEG compression, although it might not have much practical resonance for folks more worried about OTA vs. DTV. Maybe I should have put more effort into preventing folks from derailing my reason for presenting it by treating it as a more practical concept than it really is. Too late.


----------



## TyroneShoes (Sep 6, 2004)

bdlucas said:


> ...It was just waving and measuring.


Even if it was, I know you well enough to know that if anyone here was hanging on every word from every poster, it was probably you.


----------



## Wolffpack (May 28, 2003)

TyroneShoes said:


> Bottom line, it's close. Very close. It's like a Coke drinker having to drink Pepsi. You wouldn't abhor a hamburger joint with great burgers just because you can only get Pepsi, and it makes sense to decide on which content provider to use based on almost everything else but the differences between how well they present their content (because there is very little difference).


I agree with most of your post, but this is a bad example. I hate Pepsi. I'll go to a great burger joint but if they don't have Coke, I go with water or Iced Tea. 

Pepsi Sucks.


----------



## good.deals (Jan 28, 2006)

feldon23 said:


> **warning: blasphemy!** Moxi is almost as good as a TiVo. I'd go for it.


I had Moxi for 2 days about 3 months ago. Sound level on the box was super low; I had to turn up my sound system to uncomfortable levels to hear anything clearly, and I got mad buzzing from the level being so high. The installer and the CSRs told me to use the sound through my TV and forget using my sound system. Ha! I'm gonna stop using my $1200 system just so I can use their 15-hour Moxi box. Forget that crap... I went to DTV and now it's all good. BTW, the Moxi box was with Adelphia, the worst cable company in the world; the people at their local Orange County office don't know the first thing about anything.

I had them pick up the box after the second day and they tried to charge me for the pickup. God cable sucks.


----------



## MeStinkBAD (Jul 16, 2003)

Listen, I live at least 60 miles from where they broadcast HD OTA signals. If anyone has a suggestion of picking these stations up, PLEASE TELL ME.


----------



## sandiegojoe (Jun 4, 2005)

MeStinkBAD said:


> Listen, I live at least 60 miles from where they broadcast HD OTA signals. If anyone has a suggestion of picking these stations up, PLEASE TELL ME.


60 miles of flat terrain, or mountainous? I had the Winegard 7200 (I think that's what it was called), the biggest antenna they made. But since I live in a narrow valley, I had to jack it up pretty high to get a signal, and even then I just got a couple stations. When I heard the initial reviews of MPEG-4 locals on D being equivalent to OTA, I sold my antenna... it was just a pain to have this huge thing on my roof for only 2 stations.

But if you have a clear line of sight to the broadcast antennas, that may be the way to go. (google "deep fringe" "antenna" "HD") something like that.

Now maybe the PQ of Mpeg4 locals will go down once it is more common, but as long as it's better than mpeg2, I'll be happy.

Like many DTV users I can't get OTA, and am looking forward to mpeg4 based on initial PQ reviews. We'll see how long that lasts.


----------



## newsposter (Aug 18, 2002)

TS, I thought the wink would be a hint, but next time I'll be more explicit about my lame attempt at sarcasm. 

I heard firsthand that our cable's Moxi was decent, but slow, and not as friendly as TiVo. I did the right thing by spending $500 on my OTA setup.


----------



## TyroneShoes (Sep 6, 2004)

Wolffpack said:


> I agree with most of your post, but this is a bad example. I hate Pepsi. I'll go to a great burger joint but if they don't have Coke, I go with water or Iced Tea.
> 
> Pepsi Sucks.


I'll give you that one (and I agree). But this also points out another important aspect of the whole DTV vs. OTA question, which is that the difference may be small, but it still may make enough difference for some people to avoid DTV, while it might not matter at all for others. (as long as we're ALL avoiding DISH  )


----------



## TyroneShoes (Sep 6, 2004)

newsposter said:


> TS, I thought the wink would be a hint, but next time I'll be more explicit about my lame attempt at sarcasm...


Well, how do you know I didn't pick up on that  ? After all, you're the guy who spent a fortune and tore most of your hair out during a long, painful quest for OTA just recently (which ended in a qualified success, I'm happy to hear). I know you already answered your own "question" weeks or months ago. Maybe I'll just take any opportunity to get on the soapbox, even if it means playing dumb to your sarcasm (so the attempt wasn't lame, it was effective... I was just playing along). Never give old Ty an opening, no matter how small.


----------



## good.deals (Jan 28, 2006)

newsposter said:


> Before laying out a few hundred for my OTA setup, I did look for info on the Moxi to see if it was any good. My local cable does have great/more HD channels for virtually the same price as DTV. And though I was only swayed by one opinion not to go there, it was a moderator with personal experience that gave me the info I used to determine it wasn't for me... if you can't trust a mod, who can you trust?


HEY! I had a Moxi box for about 3 days and had them come back to take it away. The volume level was way low on the box and there was no way to raise it. I had to turn my surround sound system near max in order to hear anything. At that high a sound level you get buzzing through your speakers and risk blowing them if you change channels.

Before the uninstall, I called them and they said they know of the problem but there is no fix in sight (most of their users use their TV speakers and don't have a problem).

That is the whole reason I went to DTV (2 months ago): I couldn't get a working HD DVR.

Adelphia has been the same crap for the past 8 years: constant problems, from high prices to unreliability with service calls to outages with my internet service.


----------



## newsposter (Aug 18, 2002)

good.deals said:


> Adelphia has been the same crap for the past 8 years: constant problems, from high prices to unreliability with service calls to outages with my internet service.


Well, in all fairness, the previous owners did take care of their town with all your $$$$  Of course, jail isn't as pretty.


----------



## timb2112 (Dec 2, 2005)

BigTuna said:


> Shoes-
> 
> You have wayyyyyyy tooooooo much time on your hands. Here's an idea.........just watch the one that looks the best, you get both of them and only pay for one of them.
> 
> ...


Big Tuna....

It's a digital signal. Distance from the tower makes no difference. You either get the signal or you don't.


----------



## timb2112 (Dec 2, 2005)

BigTuna said:


> blah...........blah.......blah.........
> 
> You both sound like witch doctors claiming to have the "magic potion" that cures all diseases.
> 
> ...


You don't need to be a scientist to see the difference between those two pictures. What do you think, that he spent his own time to lie to you? Maybe you need glasses?


----------



## TyroneShoes (Sep 6, 2004)

Just goes to show you that all fish starts to smell after about 3 days.


----------



## jvrobert (Mar 7, 2003)

timb2112 said:


> Big Tuna....
> 
> It's a digital signal. Distance from the tower makes no difference. You either get the signal or you don't.


First, I thought everyone knew OTA was better than satellite HD, assuming your local providers have decent technology and you get a decent signal. So what's the point of this thread?

Second, being a digital signal doesn't magically transform reception to either 100% or 0%. If you're far from the tower or there are obstructions, you'll lose data; it doesn't matter whether it's analog or digital. In fact, a "digital" signal is just an analog signal interpreted differently, still an analog carrier, and you can lose data.


----------



## SpankyInChicago (May 13, 2005)

jvrobert said:


> Second, being a digital signal doesn't magically transform reception to either 100% or 0%. If you're far from the tower or there are obstructions, you'll lose data; it doesn't matter whether it's analog or digital. In fact, a "digital" signal is just an analog signal interpreted differently, still an analog carrier, and you can lose data.


Incorrect.

With ATSC you are either receiving the signal or you are not. If you are receiving the signal then reception, by definition, is perfect.

You'll need to read up on 8VSB and ATSC, but there is so much data redundancy in the transmission that a complete (and perfect) signal can be reproduced even with a large number of data bits missing.

You are correct. Data bits will be lost. But those lost data bits will not prevent perfect reception.

I'll now turn it over to Mr. Shoes to eloquently explain the technical details.
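
The redundancy argument can be sketched with a toy code in Python. This is nothing like the actual ATSC scheme (which uses Reed-Solomon and trellis coding over 8VSB); it's just a minimal repetition-code illustration of the principle that a corrupted bitstream can still decode to an exact copy of the payload:

```python
# Toy forward-error-correction demo (NOT the real ATSC Reed-Solomon/
# trellis scheme): a 3x repetition code with majority-vote decoding.
# One of every three transmitted copies is corrupted, yet the decoded
# payload is bit-for-bit identical to the original.
import random

def encode(bits):
    # Transmit each payload bit three times.
    return [b for b in bits for _ in range(3)]

def decode(channel_bits):
    # Majority vote over each group of three copies.
    return [1 if sum(channel_bits[i:i + 3]) >= 2 else 0
            for i in range(0, len(channel_bits), 3)]

rng = random.Random(42)
payload = [rng.randint(0, 1) for _ in range(1000)]

# Corrupt exactly one copy in every group of three (a 33% raw bit loss).
received = [b ^ 1 if i % 3 == 0 else b for i, b in enumerate(encode(payload))]

print(decode(received) == payload)  # True: reception is still "perfect"
```

A real code spends far less overhead than 3x, but the takeaway is the same: up to its correction limit, lost bits cost you nothing.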


----------



## feldon23 (Mar 13, 2001)

jvrobert said:


> First, I thought everyone knew OTA was better than satellite HD, assuming your local providers have devent technology and you get a decent signal. So what's the point of this thread?


Silly us! And we expected $5.99/month for locals (High Def or otherwise) to look equal to or better than what we get over a $20 pair of rabbit ears.

Spanky obviously has perfect local channels, perfectly flat landscape, and perfect antennas at his house. The rest of us struggle to get a steady picture because of flaws in the implementation of 8VSB modulation, terrain issues, etc.

While you're digging up technical data to point out to the original poster, don't forget to read up about multipath. 

See my sig.


----------



## bdlucas (Feb 15, 2004)

SpankyInChicago said:


> With ATSC you are either receiving the signal or you are not. If you are receiving the signal then reception, by definition, is perfect.
> 
> You'll need to read up on 8VSB and ATSC, but there is so much data redundancy in the transmission that a complete (and perfect) signal can be reproduced even with a large number of data bits missing.
> 
> You are correct. Data bits will be lost. But those lost data bits will not prevent perfect reception.


Spanky, you're basically correct, but there's a sense in which jvrobert is technically correct also. As signal quality degrades, the picture stays essentially perfect up to a point; as it degrades further, the picture very quickly collapses until it is essentially nonexistent. But there is a narrow band of signal qualities where the picture is somewhere between perfect and nonexistent, and in this band you will see varying degrees of digital picture artifacts. The difference is that analog degrades gradually over a wide range of signal qualities, whereas digital degrades rapidly over a very narrow range, and is essentially perfect or nonexistent on either side of that range. So technically it's not really just 0% or 100%, but in practice it essentially is, because the signal quality is almost always on one side or the other of the narrow range over which the digital signal degrades.

Did that make sense?
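
The "cliff" described above can be caricatured in a few lines of Python. Every number here is an illustrative assumption (the 30 dB analog scale, the 15 dB threshold, the 1 dB transition band, the logistic falloff), not real 8VSB behavior; the point is only the shape of the two curves:

```python
# Cartoon model of analog vs. digital degradation (illustrative numbers,
# not real 8VSB math). Analog quality tracks signal quality smoothly;
# digital is perfect above a threshold, gone below it, with only a
# narrow transition band in between.
import math

def analog_quality(snr_db):
    # Analog: picture quality degrades gradually with signal quality.
    return max(0.0, min(1.0, snr_db / 30.0))

def digital_quality(snr_db, threshold_db=15.0, band_db=1.0):
    # Digital: all-or-nothing outside a narrow band around the threshold.
    if snr_db >= threshold_db + band_db:
        return 1.0
    if snr_db <= threshold_db - band_db:
        return 0.0
    # Logistic falloff inside the narrow transition band.
    return 1.0 / (1.0 + math.exp(-6.0 * (snr_db - threshold_db)))

for snr in (5, 14, 15, 16, 25):
    print(f"SNR {snr:2d} dB  analog {analog_quality(snr):.2f}  "
          f"digital {digital_quality(snr):.2f}")
```

Running it shows analog quality climbing smoothly from 0 toward 1, while digital quality sits at 0, snaps through the narrow band around 15 dB, and is pinned at 1 above it.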


----------



## TyroneShoes (Sep 6, 2004)

SpankyInChicago said:


> Incorrect.
> 
> With ATSC you are either receiving the signal or you are not. If you are receiving the signal then reception, by definition, is perfect.
> 
> ...


I think you did fine with the explanation. But some will not have been in the proper state of mind to accept your explanation, because they are probably still reeling from the first word..."Incorrect". That makes the medicine a bit hard to swallow, no matter who is "correct".


----------



## TyroneShoes (Sep 6, 2004)

bdlucas said:


> Spanky, you're basically correct, but there's a sense in which jvrobert is technically correct also...
> 
> Did that make sense?


Sure did. I think everybody's "correct". But we could actually split hairs (sudden roar of mouseclicks as posters quickly click to the next thread) and look at bd's explanation another way. It can be said that you either have lock and a perfect picture or you don't have lock and you have either no picture or garbage on the screen from momentary buffer underflow. It can be also said that these are the only two possible conditions, and that would be "correct" also, if you qualify that with the words "at any one point in time".

But bd is also "correct". Even though these are the only two possible conditions, if you take a span of time into account, you can have a scenario where reception goes from perfect to imperfect to completely absent very quickly, and back again. That is not really a third condition; it is moving from one state to the other, back and forth, very quickly. And when you are on the cusp of reception, that is exactly what happens: you are achieving, then losing, digital lock over and over again, because atmospheric and reflective issues dynamically modulate the S/N ratio at the tuner front end first above and then below the threshold of +15 dB.

If you are 3 miles from the transmit tower and have a clear shot and a great antenna, you can decode close to 100% of the bits. That will give you a perfect picture. You can also be 30 miles from the towers and only get 70% of the bits due to attenuation and interference. As long as your S/N ratio stays above the 15 dB threshold, you will have enough bits to reconstruct a perfect picture identical to the one reconstructed at 3 miles away, due to the redundancy and overhead in the 8VSB signal. So PQ itself never degrades in ATSC reception. You either have it (with perfect PQ), you don't have it, or you are on the cusp where first you have it, then you don't, on and on, _ad infinitum._
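
That all-or-nothing behavior reduces to a one-line rule. A cartoon in Python (the 70% figure is just the example above, not a real ATSC correction limit):

```python
# Cartoon of ATSC's all-or-nothing decoding (illustrative only): as long
# as enough of the redundant transmission survives, the decoder outputs
# the identical picture; below that, it outputs nothing at all.
CORRECTION_CAPACITY = 0.70  # assumed fraction of bits needed; illustrative

def decode_frame(payload, fraction_received):
    # Returns the exact payload if decodable, else None -- no in-between.
    return payload if fraction_received >= CORRECTION_CAPACITY else None

frame = "perfect picture"
print(decode_frame(frame, 1.00))  # 3 miles out:  perfect picture
print(decode_frame(frame, 0.70))  # 30 miles out: perfect picture (identical)
print(decode_frame(frame, 0.50))  # past the cliff: None
```

The 3-mile and 30-mile receivers produce the exact same output; only the receiver past the cliff gets nothing.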


----------



## SpankyInChicago (May 13, 2005)

feldon23 said:


> Silly us! And we expected $5.99/month for locals (High Def or otherwise) to look equal to or better than what we get over a $20 pair of rabbit ears.
> 
> Spanky obviously has perfect local channels, perfectly flat landscape, and perfect antennas at his house. The rest of us struggle to get a steady picture because of flaws in the implementation of 8VSB modulation, terrain issues, etc.
> 
> ...


I have had my own share of multipath issues. In my mind that is a signal reception problem, not a data loss problem.

I was being pedantic, and I try not to do that, so you are correct that a perfect signal may not mean perfect reception.


----------



## SpankyInChicago (May 13, 2005)

bdlucas said:


> Did that make sense?


Yes. Well said.


----------



## TyroneShoes (Sep 6, 2004)

SpankyInChicago said:


> I have had my own share of multipath issues. In my mind that is a signal reception problem, not a data loss problem.
> 
> I was being pedantic, and I try not to do that, so you are correct that a perfect signal may not mean perfect reception.


Of course what you are receiving is packets of data. The things that can interfere with you receiving them uncorrupted are but two: too low a signal, or too much interference (assuming no transmission issues). So for ATSC reception, a loss of data is a reception problem by definition.

A perfectly-received signal, assuming it presents itself that way to the tuner input and is decoded properly and makes it unencumbered to the display, can't give you anything other than perfect reception.


----------



## topvik2 (Nov 26, 2002)

I can only get OTA HD stations in a 4x3 format and not the 16x9 format I get on DTV. My question: is this because of the broadcast format, or do I need a different antenna?


----------



## HomieG (Feb 17, 2003)

topvik2 said:


> I can only get OTA HD stations in a 4x3 format and not the 16x9 format I get on DTV. My question: is this because of the broadcast format, or do I need a different antenna?


It's not an "HD station"; rather, it's a digital TV station. They don't have to put HD programming on their digital signal. Many still put their SD material up on the digital station, especially in smaller markets. Methinks that's what you are seeing: SD over their DTV (digital) transmission.


----------



## topvik2 (Nov 26, 2002)

Thanks, that's kinda what I thought. Do you think if I raise a lot of stink about not getting true HDTV, they would allow a DNS weaver?


----------



## tucsonbill (Aug 11, 2004)

topvik2 said:


> I can only get OTA HD stations in a 4x3 format and not the 16x9 format I get on DTV. My question: is this because of the broadcast format, or do I need a different antenna?


I think you need to be a little more specific. Are you talking about network primetime programming that you know is HD?


----------



## phox_mulder (Feb 23, 2006)

tucsonbill said:


> I think you need to be a little more specific. Are you talking about network primetime programming that you know is HD?


And are you watching the digital version of the channel, i.e. 2-1, 4-1, 5-2, etc.,
and are you sure that the HD is on that subchannel?

Fer Instance: Channel 5 here is the NBC affiliate.
Regular channel 5 is the analog only channel.
Then they have 5-1, 5-2, and 5-3.
5-1 is the SD upconvert, always.
5-2 is the SD upconvert during the day, HD NBC primetime during primetime.
5-3 is a constant weather channel.

Channel 2 is the CBS affiliate.
2-1 is the digital channel, SD upconvert during the day,
then only true HD during CBS primetime, and then not always, as Survivor, Amazing Race, 48 Hours and other news/reality programming isn't ever HD.
No subchannels, only the one.

phox


----------



## topvik2 (Nov 26, 2002)

I live in an area where DTV does not offer locals, either HD or SD, so I got weavers for NBC, CBS & Fox for SD DNS. DTV is saying that I also need a weaver for the HD feeds. 
The stations are claiming to be broadcasting in HD on 3.1, 11.1 & 16.1, but my receiver says the sound is coming in as Dolby 2.0.
My question is: do you think it would be worth the effort of fighting the local stations for a waiver for the true HD service?


----------



## phox_mulder (Feb 23, 2006)

topvik2 said:


> The stations are claiming to be broadcasting in HD on 3.1, 11.1 & 16.1, but my receiver says the sound is coming in as Dolby 2.0.
> My question is: do you think it would be worth the effort of fighting the local stations for a waiver for the true HD service?


HD doesn't mean 5.1 audio.
Also, HD doesn't always mean HD:
programming before and after prime time is most likely upconverted SD, with 2.0 sound all around.

Even primetime programming still has a lot of 2.0-sound shows, or the local affiliate may not be capable of broadcasting 5.1 surround.

What networks are those channels affiliated with?
You only listed 3 of the 4 you'd be able to get with waivers.

phox


----------



## slydog75 (Jul 8, 2004)

Ohhh.. you mean waiver.. I was wondering what a DNS weaver was.


----------



## newsposter (Aug 18, 2002)

TyroneShoes said:


> Once DTV uses MPEG-4, the difference in quality will do one of three things: get better, get worse, or stay the same.
> 
> Let me know what you think.


I think I'd like to have the thread updated for MPEG 4 comparisons


----------



## madbeachcat (Jan 31, 2002)

newsposter said:


> I think I'd like to have the thread updated for MPEG 4 comparisons


I would too, since all of the locals are provided in MPEG4 now, with the DNS service being the exception. Aren't they?


----------



## MrLatte (Oct 11, 2000)

I kept hearing that the MPEG-4 locals look the same as the OTA HD channels but I disagree. The OTA HD channels look better.

You can really see the difference in darker scenes like in Ghost Whisperer on CBS. I don't have any screenshots to upload (perhaps I should try and take some) but when you can see a difference it's usually on a background wall with a darker gradient of color and the DirecTV MPEG-4 picture has banding and some microblocking whereas the OTA (MPEG-2?) feed from the same channel (7-1 KIRO for me) has no banding. It just looks grainy but with more detail.


----------



## incog-neato (Sep 18, 2007)

Move closer? Bigger Antenna?


MeStinkBAD said:


> Listen, I live at least 60 miles from where they broadcast HD OTA signals. If anyone has a suggestion of picking these stations up, PLEASE TELL ME.


----------



## slydog75 (Jul 8, 2004)

incog-neato said:


> Move closer? Bigger Antenna?


This guy's last post was nearly 6 months ago, not to mention that he asked the question over 18 months ago.


----------

